You have a strange understanding of morality. Morality isn’t merely “common intuition”; I’d venture that most people have little intuition that could be used to create complex moral societies.
You claim that morality is a balance of things. For example, though taxation is the involuntary taking of other people’s stuff, if it has some good consequences then we ought not to call it theft.
Do we, though, agree to the rape of a woman if said rape results in the feeding of 10 starving children in Africa? Of course not. Morality is not about a balance of things—it is a set of rules to be followed. If we agree that theft is legally unethical, then taxation is also legally unethical, regardless of your alleged benefits. If, on the other hand, we accept taxation, we must necessarily decide that theft is not legally unethical and the whole concept of property goes out the window. Yes, the existence of taxation is an explicit repudiation of property.
This is a claim that consequentialism is incorrect and deontology is correct. It’s insufficient to merely make this claim—you have to actually argue for it.
(The prevailing view around here is consequentialism, although if I recall correctly we have at least one deontologist and one virtue ethicist among the long-time members.)
Hallo.
Salut?
Yup.
I asked Alicorn for an intro to virtue ethics appropriate for Less Wrongers or toddlers and she said to ask you. The Wikipedia article on virtue ethics explains deontology and consequentialism but not virtue ethics. The Stanford Encyclopedia article is better but unclear on what virtues are or do. Where’s Virtue Ethics 101?
I think those are actually pretty good intros, and I’m not aware of another one that’s available online. Virtue Ethics for Consequentialists here on Less Wrong is pretty good. That said, I can provide a short summary.
I’ve noticed at least 3 things called ‘virtue ethics’ in the wild, which are generally mashed together willy-nilly (I’m not sure if anyone has pointed this out in the literature yet, and virtue ethicists seem to often believe all 3):
an empirical claim, that humans generally act according to habits of action and doing good things makes one more likely to do good things in the future, even in other domains
the notion that ethics is about being a good person and living a good life, instead of whether a particular action is permissible or leads to a good outcome
virtue as an achievement; a string of good actions can be characterized after the fact as virtuous, and that demonstrates the goodness of character.
A ‘virtue’ is a trait of character. It may be a “habit of action”. A virtue is good for the person who has it. A good human has many virtues.
Traditionally, it is thought that any good can be taken in the right amount, too much, or too little, and virtue is the state of having the proper amount of a good, or concern for a good. Vice is failing at virtue for that good. So “Courage” is the proper amount of concern for one’s physical well-being in the face of danger. “Cowardice” is too much concern, and “Rashness” is too little. For fun, I’ve found a table of virtues and vices from Aristotle here.
‘Virtue’ is also used to describe inanimate objects; they are simply the properties of the object that make it a good member of its class. For example, a good sword is sharp. Whether “Sharpness” is a good or a virtue for swords is left as an exercise to the reader.
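To show the shape of that traditional picture (and only the shape), here is a toy sketch with invented thresholds; the traditional view supplies no numbers, and as a later comment notes, nobody has proposed a systematic way of assessing these things.

```python
# Toy rendering of the excess / mean / deficiency structure for courage.
# The thresholds are invented; "concern" is not really a one-dimensional scalar.
def classify_concern(concern, too_little=0.3, too_much=0.7):
    """Map a level of concern for one's physical safety onto the traditional triad."""
    if concern < too_little:
        return "Rashness (deficiency of concern)"
    if concern > too_much:
        return "Cowardice (excess of concern)"
    return "Courage (the mean)"

for level in (0.1, 0.5, 0.9):
    print(level, "->", classify_concern(level))
```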
Thanks! How do you pick virtues, though? (A virtue is what for the person who has it?) And how do you know when you have the proper amount of courage? And what’s wrong with never getting angry anyway?
If you go by the traditional view, any good (or category of goods) has an associated virtue. So one could own a virtuous number of horses, but just how many that is probably depends on many accidental features. Virtues are subject-sensitive; I might need more horses than you do. But something specific like “proper concern for the number of horses you have” is usually covered under a more general virtue, like Temperance. (Of course, “Temperance” is about as helpful a description of virtue, under this model, as “non sequitur” is as a description of a logical fallacy.)
A more modern approach is to use an exemplar of virtue (a paragon of human goodness) and inspect what qualities they have. You could construct a massive database of biographical information and then use machine-learning techniques to identify clusters. This could, of course, lead to cargo-cult behavior.
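To make that concrete, here is a minimal sketch of the clustering idea, assuming we somehow had numeric biographical features for a handful of exemplars; the feature matrix and trait names below are invented placeholders, and k-means is only one of many clustering methods one might pick.

```python
# Sketch of the exemplar-clustering idea. The "biographies" here are an
# invented feature matrix (rows = exemplars, columns = hypothetical trait
# measurements); a real attempt would need actual biographical data.
import numpy as np
from sklearn.cluster import KMeans

features = np.array([
    [0.9, 0.2, 0.8],  # e.g. generosity, acquisitiveness, risk tolerance
    [0.8, 0.1, 0.9],
    [0.2, 0.9, 0.1],
    [0.3, 0.8, 0.2],
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(model.labels_)           # which cluster each exemplar falls into
print(model.cluster_centers_)  # shared trait profiles: candidate "virtue" clusters
```

The cargo-cult worry is exactly that such clusters might pick up incidental features of the exemplars rather than whatever made them good.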
Courage is always the proper amount, by definition, on the common view. Whether you have a particular virtue is a hard question, and I don’t think anyone has proposed a systematic way of assessing these things. I think the usual advice is to be wise, and then you’ll know.
That’s an empirical question. If getting angry is just bad, then it’s not a good and doesn’t have an associated virtue. I’d guess that for humans, there are a lot of situations that you’ll handle better in practice if you get angry. Anger does seem to motivate social reforms and such. Remember that virtues are defined with respect to a subject, and if human nature changes then so might the virtues.
Of course, I’ve answered these with my Philosopher’s hat on, and so I’ve made the questions harder rather than easier. I might reflect on this and give more practical answers later.
Okay, so to know what virtues are, you need to know what things are good to have in the first place. So its use is not to figure out what you care about, it’s to remind yourself you care about it. Like, a great swordsman insults you, you’re afraid but you remember that courage and pride are virtues, so you challenge him to a duel and get killed, all’s fine. But you can’t actually do that, because then you remember that rashness and vanity are vices, and you need to figure out on which side the duel falls. How is any of this virtuous mess supposed to help at all?
The theory is an attempt to explain the content of ethics. I’m not sure it’s any use to “remind yourself you care about it”.
In general, one should proceed along established habits of action—we do not have the time to deliberate about every decision we make. According to virtue ethics, one should try to cultivate the virtues so that those established habits are good habits.
Suppose you’re sitting down in your chair and you decide you’d like a beer from the refrigerator. But you need to pick a good path to the refrigerator! The utilitarian might say that you should evaluate every possible path, weigh them, and pick the one with the highest overall net utility. The Kantian might say that you should pick a path using a maxim that is universalizable—be sure not to cause any logical contradictions in your path-choosing. The virtue ethicist would suggest taking your habitual path to the refrigerator, with the caveat that you should in general try to develop a habit of taking virtuous paths.
And importantly, the previous paragraph was not an analogy, it was an example.
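For what it’s worth, the structural difference in the refrigerator example reads clearly in code. A toy sketch with invented paths and utility numbers, not a serious model of either theory:

```python
# Toy contrast of the decision procedures in the refrigerator example.
# Paths and utility numbers are invented for illustration.
paths = {
    "through the kitchen": 1.0,
    "around the sofa": 0.9,
    "over the sofa": 0.2,  # spilled beer, bruised shins
}

def act_utilitarian_choice(paths):
    """Evaluate every available path and take the highest-utility one."""
    return max(paths, key=paths.get)

def habitual_choice(habit="through the kitchen"):
    """Just take the habitual path; the ethical work went into forming the habit."""
    return habit

print(act_utilitarian_choice(paths))  # fresh deliberation at every decision
print(habitual_choice())              # cheap at decision time
```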
So the impression I get is that virtue ethics is very similar to rule utilitarianism?
There is a virtue theory within utilitarianism that works out very similarly, yes. Note that “rule utilitarianism” usually refers to an ethical system in which following rules is valued for itself—I forget the name for the view amongst utilitarians that following rules is high-utility, which is what I think you mean to refer to.
What I’m thinking of is the theory that instead of trying to take the highest utility action at any given point, you should try to follow the highest utility rules that have been reflectively decided upon. ie, instead of deciding whether to kill someone, you just follow the rule “do not kill except in defense” or something along those lines.
Leaving aside the differences in moral justification, virtue ethics differs from rule utilitarianism in the practical sense that virtues tend to be more abstract than rules. For example, the aim is not so much “avoid unnecessary killing” as “become a kind person”.
Well, “become a kind person” isn’t a terribly useful instruction unless you already know what “kind” means to begin with.
Right. But that’s a guide to action, not a description of the good (which utilitarianism purports to be). The utilitarian would justify that course of action with reference to its leading to higher expected utility. If the empirical facts about humanity were such that it is more efficient for us to calculate expected utility for every action individually, then those folks would not advocate following rules, while “rule utilitarians” still would.
I think a rule utilitarian might say that I should evaluate various algorithms for selecting a path and adopt the algorithm that will in general cause me to select paths with the highest overall net utility. Which, yes, is similar to the virtue ethicist (as described here) in that they are both concerned with selecting mechanisms for selecting paths, rather than with selecting paths… but different insofar as “virtuous” != “having high net utility”.
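A rough sketch of that two-level structure, assuming we can score a whole selection policy by its average utility over many situations; the policies, situations, and numbers below are all invented:

```python
# Sketch of the two-level idea: score whole selection policies by their
# average utility across many situations, adopt the best one, then apply
# it case by case. Policies, situations, and utilities are invented.
import random

random.seed(0)
situations = [[random.random() for _ in range(5)] for _ in range(1000)]

policies = {
    "always take the first option": lambda options: options[0],
    "take the best-looking option": lambda options: max(options),
    "take a random option": lambda options: random.choice(options),
}

def average_utility(policy):
    return sum(policy(options) for options in situations) / len(situations)

best = max(policies, key=lambda name: average_utility(policies[name]))
print("adopt:", best)  # and then follow that policy at each decision point
```

Whether the adopted policy deserves to be called a “virtue” rather than just a high-utility rule is exactly the disagreement above.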
Aristotle argued that you don’t even know whether someone lived a good life until after they died and you have time to reflect on their life and achievements, and even then I think he was going by “I’ll know it when I see it”.

I’m under the impression that Aristotle argues the very opposite (in NE I.10, for example). Can you cite a passage for me?
Yeah, I think I was remembering his intermediate conclusions more than the final one. I was just about to cite that passage before realizing it’s the one you meant.
This is an assertion, not an argument. Why is morality about rules, not consequences?
I don’t actually understand what people mean when they say in principle it’s the rules which matter, not the balance of the good and bad consequences which occur. If consequences were unimportant, why have the rules that we have? Surely you agree that proscriptions against rape, murder, theft, torture, arson, etc all have the common thread of not causing undue suffering to another person?
I can understand (and in most cases accept) the argument that human beings are too flawed to figure out and understand the consequences. Therefore, in most cases we should stick to tried and tested rules which have reduced suffering and created peaceful societies in the past and shut down the cognitive processes which say, “But maybe I could murder the leader and seize power just this once if the whole group will benefit....”
But I can’t see how the point of morality is rules. If that’s the case, why are the rules not completely random? Why is morality not fashion?
By the way, 10 people is probably too low a number for me to sacrifice myself, especially given that I can just donate a large portion of my income to save thousands of lives. But if, in some bizarre world, the only way for me to save X people were to be subjected to rape (I’m female, BTW), for sufficiently high values of X I should damn well hope I’ll step up. (And I’m not proud that 1 or 2 or 10 probably wouldn’t do it for me. I’m selfish, and I am what I am, but this is not my ideal self.)
The one who offered this sadistic choice is of course evil, because ze could have fed the starving kids without raping me, thus creating the maximum well-being in zir capacity. (Knock-on effects of encouraging sadistic rapists should be factored into the consequential calculation, but I have no problem treating a hypothetical as pure and simple.)
I’m not actually knocking rules here; I think we run on corrupted hardware and in our personal lives we should follow rules quite strictly. I’m just saying that the rules should be (and are) derived from the consequences of those actions.
To play the devil’s advocate (I am not a deontologist myself), the converse question, i.e. why we care about the consequences we care about, is about as legitimate as yours. It is not entirely unimaginable for a person to have a strong instinctive aversion towards murder while caring much less (or not at all) about its consequences. Many people indeed reveal such preferences by voting for inaction in the Trolley Problem or by subscribing to Rand’s Objectivism. You seem to think that those people are in error, actually having derived their deontological preferences from harm minimisation and then forgetting that the rules aren’t primary—but isn’t it at least possible that their preferences are genuine?
It’s hard for me to say when and whether other people are in error, especially moral error. I don’t deny that it’s possible people have a strong aversion to murder while not caring about the consequences. In fact, in terms of genetic fitness, going out of your way to avoid being the one who personally stabs the other guy while not caring much whether he gets stabbed would have helped you avoid both punishment and risk.
But from my observations, most people are upset when others suffer and die. This tells me most of us do care, though it doesn’t tell me how much. I don’t actually rail against people who care less than I do; as a consequentialist one of the problems I need to solve is incentivizing people to help even if they only care a little bit.
Caring is like activation energy in a chemical reaction; it has to get to a certain point before help is forthcoming. We can try to raise people’s levels of caring, which is usually exhausting and almost always temporary, or we can make helping easier and more effective, and watch what happens then. If it becomes more forthcoming, we can believe that consequences and cost-benefit balances do matter to some degree.
This was a circuitous answer, I know. My reply to you is basically, “Yes, it’s possible, but people don’t behave as if they literally care nothing for consequences to other people’s well being.”
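Taking the activation-energy analogy above more literally than it deserves: a toy model with invented numbers, in which a person helps only when their caring clears the cost of helping, so lowering the cost recruits people whose level of caring never changed.

```python
# Toy model of the "activation energy" analogy: help is forthcoming only
# when caring exceeds the cost of helping. All numbers are invented.
caring_levels = [0.1, 0.3, 0.5, 0.7, 0.9]  # a small imaginary population

def helpers(cost_of_helping):
    return sum(1 for caring in caring_levels if caring > cost_of_helping)

print(helpers(0.8))  # helping is hard: 1 person helps
print(helpers(0.4))  # helping made easier: 3 people help
```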
I can’t but agree with all you have written, but I have the feeling that we are now discussing a question slightly different from the original one: how can the point of morality be rules? People indeed don’t behave as if they literally care nothing for consequences to other people’s well-being, but many people behave as if, in certain situations, the consequences are less important than the rules. Often it is possible to persuade them to accept the consequentialist viewpoint by abstract argument—more often than it is possible to convert a consequentialist to deontology by abstract argument—but that only shows consequentialism is more consistent with abstract thinking. Yet there are situations, like the Trolley Problem, where even many of the self-identified consequentialists prefer rules over consequences, even if it necessitates heavy rationalisation and/or fighting the hypothetical.
It seems natural to conclude that for many people, although the rules aren’t the point of morality, they are certainly one of the points, and they stand independently of another point, which is the consequences. Perhaps it isn’t a helpful answer if you want to understand, on the level of gut feelings, how the rules can trump solid consequentialist reasoning even in the absence of uncertainty and bias, if your own deontologist intuitions are very weak. But at least it should be clear that the answer to the question you asked in your topmost comment, “[if the point of morality is rules] why are the rules not completely random?”, has something to do with our evolved intuitions. And even if you disagree with that, I hope you agree that whatever the answer is, it would not change much if in the conditional we replace “rules” by “consequences”.
I agree with you there. But even though people seem to care about both rules and consequences, as separate categories in their mental conceptions of morality, it does seem as if the rules have a recurring pattern of bringing about or preventing certain particular consequences. Our evolved instincts make us prone to following certain rules, and they make us prone to desiring certain outcomes. Many of us think the rules should trump the desired outcomes—but the rules themselves line up with desired outcomes most of the time. Moral dilemmas are just descriptions of those rare situations when following the rule won’t lead to the desired outcome.
Compared to what? Or corrupted from what more functional state?
Hm, I used the local vernacular instead of explaining myself more clearly. You make a valid point.
How about this: our brain was not created in one shot. New adaptations were layered over more primitive ones. The neocortex and various other recent adaptations, which arose back when the genus Homo came into being, are most likely what give me the thing I call “consciousness.” The cluster of recently adapted conscious modules makes up the voice in my head which narrates my thoughts. I restrict my definition of “I” to this “conscious software.” This conscious “I” has absorbed various values which augment the limited natural empathy and altruism that were beneficial to my ancestors. Obviously, “I” only care about “me.”
But the voice which narrates my thoughts does not always determine the actions my body performs. More ancient urges like sex, survival, and self-interest most often prevail when I try to break too far out of my programming by trying too hard to follow my verbal values to their fullest extent.
But these ancient functions don’t exactly get a say when I’m thinking my thoughts and determining my values. So, from the perspective of my conscious, far-mode modules, which have certain values like “I should treat people equally,” “I should be honest,” and “My values should be self-consistent and complete,” older modules are often trying to thwart me.
This relates to moral dilemmas because when the I in my brain is trying to honestly and accurately calculate what the best course of action would be, selfishness and power-grabbing instincts can sneak in and wordlessly steer my decisions so the “best” course of action “coincidentally” ends up with me somehow getting a lot of money and power.
This is what I meant when I used the shorthand.
Thanks for the explanation. Do you intend terms like ‘software’ and ‘hardware’ and ‘programming’ to be metaphorical?
If some primitive impulse overrides your conscious deliberation, why do we call that an ‘action’ at all? We don’t think of reflexes as actions, for example, at least not in any sense to which responsibility is relevant.
Yeah. I borrowed my vocabulary for discussing this kind of thing from a community dominated by programmers, and I myself am a pretty math-y kind of person. :)
In the end, I feel responsible for the actions of my body caused by selfish impulses, even if I don’t verbally approve of them. And society holds me responsible, too. Regardless of whether it’s fair, I have to work in a world where I’m expected to control my brain.
Besides, I am smarter than my brain, after all. There are limits to how much I can exert conscious control over ancient motivations—but as far as I’m concerned, it’s totally fair to criticize me for not doing my absolute best to reach that limit.
For example, the brain is a creature of habit, and because I haven’t started my independent life yet, I’m in the perfect position to adopt habits that will improve the world optimally. I can plan ahead of time to only spend up to a certain dollar amount on myself and my friends/family (based on happiness research, knowledge of my own needs, etc) and throw any and all surplus income into an “optimal philanthropy” bucket which must be donated. My monkey brain will just think of that money as “unavailable” and donate out of habit, allowing me to maximize my impact while minimizing difficulty for myself. (Thinking of meat as “just unavailable” is how I and most other vegetarians organize our diets without stress.)
I know I can do this, the science backs me up; if I do not, and succumb to selfish impulses anyway, that’s definitely my fault. I have the opportunity to plan ahead and manipulate my brain; if my values are to be self-consistent, I must take it.
Thanks, by the way, for indulging my question and elaborating on something tangential to your point.
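If it helps, the spending-cap precommitment described a couple of paragraphs up is just this arithmetic; the figures below are placeholders for illustration, not a recommendation about actual amounts.

```python
# The spending-cap precommitment from the comment above, as arithmetic.
# All figures are placeholders chosen for illustration.
annual_income = 60_000
personal_cap = 35_000  # decided in advance from needs, happiness research, etc.

donation_bucket = max(annual_income - personal_cap, 0)
print(f"spend up to {personal_cap}; treat {donation_bucket} as already unavailable")
```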
This is similar to the ‘corrupted hardware’ claim insofar as both seem to me to be in tension with the software/hardware metaphor: if your brain is your hardware, and your rational deliberation and reflection is software, then it doesn’t make sense to say that the brain isn’t as smart as you (the software) are. It wouldn’t make sense to say of hardware that it doesn’t [sufficiently] perform the functions of software. Hardware and software do different things.
So it has to be that you have two different sets of software. A native software that your brain is running all the time and which is selfish and uncontrolled, alongside an engendered software which is rational and with which you self-identify. If the brain is corrupted, it’s not in its distinctive functions, but just in the fact that it has this native software that you can’t entirely control and can’t get rid of.
But that still seems off to me. We can’t really call the brain ‘corrupted hardware’ because we have no idea what non-corrupted hardware would even look like. At the moment, general intelligence is only possible on one kind of hardware: ours. So as far as we know, the hacked together mess that is the human brain is actually what general intelligence requires. Likewise, the non-rational software apparently doesn’t stand in relation to the rational software as an alien competitor. The non-rational stuff and the rational stuff seem to be joined everywhere, and it’s not at all clear that the rational stuff even works without the rest of it.
Well, when metaphors break, I say just toss ’em. It’s not exactly like the distinction between hardware and software; your new metaphor makes a little bit more sense in terms of what we’re discussing now, but in the end, the brain is only completely like the brain.
We could think of it this way: the brain is like a computer with an awful user interface, which forces us to constantly run a whole lot of programs which we don’t necessarily want and can’t actually read or control. It also has a little bit of processing power left for us to install other applications. The only thing we actually like about our computer is the applications we chose to put in, even though not having the computer at all would mean we had no way to run them.
I was not being 100% serious when I said I was smarter than my brain; it was sort of intended to illustrate the weird tension I have: all that I am is contained in my brain, but not all of my brain is who I am.
This hacked-together brain results in some general intelligence; it’s highly unlikely that it’s optimized for general intelligence, i.e. that we can’t, even in theory, imagine a better substrate for it. In short, “corrupted hardware” means “my physical brain is not optimized for the things my conscious mind values.”
Point taken, and you’re probably right about the optimization thing. Thanks for taking the time to explain.
You’re welcome! :) Thank you for forcing me to think more precisely about this.
Wiki: Corrupted hardware
I think my questions (idle though they may be) stand.
My understanding of the work of Haidt is that much of morality is pattern matching on behavior and not just outcomes, and that’s what I would expect to see in evolved social creatures.
When arguing with consequentialists, you may find it useful to use larger numbers. I recommend Graham’s number.
Very few people treat morality this way. Many people, if asked, will say that it’s moral to follow the Bible’s teachings, and yet do not stone women to death for wearing pants or men for wearing skirts. Clearly they are following some sort of internal system in which different concerns are balanced against each other in their moral decisions.
I’m too selfish to put myself at risk of retaliation for the sake of only 10 children, so no. Also, that is a really strange scenario.
No, the claim is that taxation is theft if we define it that way, but we should look more closely to see whether the theft is justified anyway, even if theft is usually bad.
Many (most?) people here disagree. What happens when the rules conflict? Then you’ve got to weigh the balance.