Be comfortable with hypocrisy
Neal Stephenson’s The Diamond Age takes place several decades in the future; in this conversation, the characters look back on the present day:
“You know, when I was a young man, hypocrisy was deemed the worst of vices,” Finkle-McGraw said. “It was all because of moral relativism. You see, in that sort of a climate, you are not allowed to criticise others—after all, if there is no absolute right and wrong, then what grounds is there for criticism?” [...]
“Now, this led to a good deal of general frustration, for people are naturally censorious and love nothing better than to criticise others’ shortcomings. And so it was that they seized on hypocrisy and elevated it from a ubiquitous peccadillo into the monarch of all vices. For, you see, even if there is no right and wrong, you can find grounds to criticise another person by contrasting what he has espoused with what he has actually done. In this case, you are not making any judgment whatsoever as to the correctness of his views or the morality of his behaviour—you are merely pointing out that he has said one thing and done another. Virtually all political discourse in the days of my youth was devoted to the ferreting out of hypocrisy.” [...]
“We take a somewhat different view of hypocrisy,” Finkle-McGraw continued. “In the late-twentieth-century Weltanschauung, a hypocrite was someone who espoused high moral views as part of a planned campaign of deception—he never held these beliefs sincerely and routinely violated them in privacy. Of course, most hypocrites are not like that. Most of the time it’s a spirit-is-willing, flesh-is-weak sort of thing.”
“That we occasionally violate our own stated moral code,” Major Napier said, working it through, “does not imply that we are insincere in espousing that code.”
I’m not sure if I agree with this characterization of the current political climate; in any case, that’s not the point I’m interested in. I’m also not interested in moral relativism.
But the passage does point out a flaw which I recognize in myself: a preference for consistency over actually doing the right thing. I place a lot of stock—as I think many here do—in self-consistency. After all, clearly any moral code which is inconsistent is wrong. But dismissing a moral code for inconsistency, or a person for hypocrisy, is lazy. Morality is hard. It’s easy to get a warm glow from the nice self-consistency of your own principles and mistake this for actually being right.
Placing too much emphasis on consistency led me to at least one embarrassing failure. I decided that no one who ate meat could be taken seriously when discussing animal rights: killing animals because they taste good seems completely inconsistent with placing any value on their lives. Furthermore, I myself ignored the whole concept of animal rights because I eat meat, reasoning that it would be inconsistent for me to assign animals any rights. Consistency between my moral principles and my actions—not being a hypocrite—was more important to me than actually figuring out what the correct moral principles were.
To generalize: holding high moral ideals is going to produce cognitive dissonance when you are not able to live up to those ideals. It is always tempting—for me at least—to resolve this dissonance by backing down from those high ideals. An alternative we might try is to be more comfortable with hypocrisy.
My personal examples of hypocrisy:
I believe it is ethically better to be a vegetarian, or better still a vegan. Yet I am not a vegetarian, and to be honest, it’s not even because I love eating meat too much (veganism would be more difficult, because I love cheese), but merely because it would be inconvenient. If I had a vegan restaurant near my job, and enough practice with cooking different vegetarian meals, I probably wouldn’t mind being vegetarian; it wouldn’t even feel like I had sacrificed anything.
(Okay, I took some steps to fix this, but this is not meant to be a thread about making excuses, it’s about admitting hypocrisy.)
I believe I should spend less time on the internet, because spending too much time online is probably the worst obstacle to reaching many of my goals. Guess what I am doing right now?
I’ve been in that spot for a long time and my excuse always was that vegetarianism would be too inconvenient.
Around the end of last year it finally clicked. The inconvenience excuse is plainly wrong in many cases AND being a vegetarian in just these cases is still a good thing!
I resolved to eat vegetarian whenever it is not inconvenient. This turned out to be almost always. Especially easy are restaurants and ordered food. When in a supermarket I never buy meat which automatically sets me up for lots of vegetarian meals.
I’m currently eating vegetarian for ~95% of my meals. As a bonus, I don’t have a bad conscience in the few cases where I do eat meat.
Actually, I just did something similar.
I already keep daily logs about what I did during the day (e.g. whether I exercised), so I added not eating meat to the list of recorded variables. This step was trivial. But now, whenever I am choosing a meal, I remember that I will have to put it on the record. And somehow this trivial inconvenience makes me choose the vegetarian meal more reliably than before.
I started recording this two weeks ago. First week there was no difference, but the second week I ate vegetarian food for the whole week. I only broke the chain today, because we already had some pre-made soup containing meat at home.
Anecdote: I was in your position at the start of 2013. I tried pescetarianism for a while and found it to be much easier than I expected; I transitioned to full vegetarianism a bit later and have found it surprisingly easy to maintain since. And I’m usually a pretty impulsive person, especially around food!
Surprise upside: Reduced decision fatigue, especially at restaurants.
Disclaimer: Typical mind fallacy, also I live in a very urban area with a higher-than-average density of vegetarians and vegetarian-options.
I’m hesitant about the idea of pescetarianism. If larger animals are more sentient, then fish don’t matter much, and pescetarianism is a good idea. But some people argue otherwise. If there’s a significant chance that fish are as sentient as anything else, eating only fish is a terrible idea.
While I was adjusting to vegetarianism, I tried to eat mid-sized animals, so it wouldn’t be terrible either way.
Perhaps beef-only could be an alternative to fish-only as a stepping stone? I think I remember beef getting better ratings than most other things in some articles comparing the utility of eating different animals under the assumption that all creatures suffer equally. (Pigs are smarter than cows, I think, but I’m not sure if you were considering them mid-sized or large.)
I figure it would be more about brain size than intelligence. I would expect pain to increase with having more neurons to feel it, not with having a deeper understanding of what it is.
Beef-only has the opposite problem. If not all creatures suffer equally, avoiding cows would be most important.
Ok, this makes sense.
Also, I found the article. If those numbers are somewhat correct, maybe add an adjustment for brain mass to it? But just from rough guesses at the effect that would have, I see why going for medium-sized animals makes sense.
I’m a vegetarian and feel hypocritical because my principles imply that I should be a vegan.
Same here. :(
Seeing an ideal not being met is not an argument against the ideal.
Hypocrisy is always evidence. Often, it is evidence about the hypocrite. It may be evidence of a weak will, a misguided belief, a misunderstood belief, or even deliberately disguised intentions. It all depends on the person and the act.
Sometimes, hypocrisy is evidence about an ideal itself, especially if many holders of the ideal also practice the hypocrisy. Then, you might start to link the hypocrisy to the ideal, perhaps as a correlated phenomenon, perhaps as an effect of some specific phenomenon or axiom of the belief. It, again, all depends.
Of course, like any error, hypocrisy is a sign that something, somewhere, is not optimized. It may be as simple as “I just COULDN’T resist that steak when I saw it sizzling” or it might be more systemic. But hypocrisy does not automatically destroy an ideal. It is evidence and it is up to us to decide what needs to be fixed. Do I need stronger will power? Do I have personal beliefs I profess not to hold but continue to act upon? Do I have a belief about my ideal that is in error? Or, finally, is the ideal itself in error? We have to figure it out.
Reminds me of “The Proper Use of Humility”. With artificial humility, a human pretends to have no strong beliefs, so they can socially display humility about any of their beliefs. In this case, with artificial consistency, a human pretends to have no difficult-to-achieve values, so they can socially display consistency between their values and behavior.
Acting according to your values is a virtue, but pretending to have no nontrivial values is cheating (or perhaps admitting to psychopathy if that really happens to be true).
It does not make sense to compare how much person X acts according to X-values with how much person Y acts according to Y-values (where X-values and Y-values are the professed values, not necessarily the ones truly felt). Those are two different scales.
I don’t think that’s true. I think this because I am pretty clearly not a psychopath (I’ve checked), and consciously decided to have no nontrivial moral values a year or two back. I had a mild anxiety disorder and was feeling constantly guilty, and as part of dealing with that I threw out all explicit moral codes.
I have more or less held to this standard; I do good things for people, when it is easy for me to do so (reputational benefits are real), or for people I like. I don’t commit crime because I don’t think the benefits are worth the risks; if I thought I had an excellent chance of getting away with it for a reward I found particularly appealing, and it wasn’t hurting anyone I liked, I probably would. In the non-iterated Prisoner’s Dilemma, I defect every time. In the iterated Dilemma, though, that’s just stupid.
A huge problem with seeing hypocrisy as a vice is that it prevents one from pointing out in polite company that the other person is hypocritical.
In general it would be good to have a culture where people can say: “Yes, I’m a bit hypocritical about one issue. Yes, it would be better if I walked my talk, but the flesh is weak.”
This does not seem nearly as bad as the flip side, people preaching weak morals so as to not be seen failing them.
The danger of being accused of hypocrisy led me to embracing amoralism. If you have no principles, you can’t break them, and you have no chance of hypocrisy. It is the only honest option; don’t want to be hypocritical, after all.
I say you’re a hypocrite, pretending indifference between good and evil yet for the most part choosing good.
If the threat of the accusation of hypocrisy is what led you to embracing amoralism, then it sounds like your (a)morality is not well founded. A moral system is a construction, like any other ideology. Getting rid of the construction because someone may criticize how it looks is not good architecture.
You might say I am hypocritical?
Oh, you!
Is this a joke? I can’t tell.
Yes, it’s a joke.
Note: edited for grammar.
This is based on “hypocrisy” referring to not living up to one’s values, but I think that there is another sense of “hypocrisy” that refers to not merely acting contrary to one’s stated values, but asserting rights inconsistent with one’s stated values. Take, for instance, members of the KKK. They surely considered themselves to have a right not to be murdered, yet they asserted a right to murder others. Avoiding this sort of hypocrisy is a fundamental principle of morality: when a kid hits another kid, an adult will often ask something like “How would you feel if he did that to you?” If you’re not willing to give others the same rights as you give yourself, that’s a good clue that either you are assigning more rights to yourself than you are entitled to, or you’re assigning fewer rights to others, or both.
Hypocrisy is only a vice for people with correct views. Consistently doing the Wrong Thing is not praiseworthy.
Unfortunately, it’s much easier to demonstrate inconsistency than incorrectness.
“In the course of my life, I have often had to eat my words, and I must confess that I have always found it a wholesome diet.”
Winston Churchill
(it’s not quite hypocrisy, but related)
The role of bodhisattva is hypocrisy as virtue. Nirvana is best for all, but a bodhisattva turns away from nirvana to help others go in. And as J. R. “Bob” Dobbs said, “I don’t practice what I preach because I’m not the kind of man I’m preaching to.”
As I understand it, a bodhisattva also enters nirvana eventually, so I don’t see the hypocrisy.
Sort of. There’s usually taken to be an infinite number of beings a bodhisattva needs to save before leaving samsara; bodhisattvas aren’t supposed to leave anybody behind, and the buddhist cosmos is very very big.
The bodhisattvayana is more utilitarian than that. The goal is to maximize enlightenment; if avoiding final nirvana for yourself allows you to enlighten two others who wouldn’t have made it, you should avoid final nirvana.
A better example of lionized hypocrisy would be the idea of ‘skillful means’ (upaya) in Buddhism. Might be better translated ‘cheating as technique’, the idea that highly enlightened beings can and should violate ordinary moral norms for the greater good. Though that’s less about living with moral inconsistency and more about living with taboo tradeoffs between causes, I think.
Those who know me in person will know I regularly point out at least one of my own hypocrisies: specifically, that of eating meat. My hope is that it makes clear that I don’t endorse my own behavior on the matter, and that I’m generally “on Team Vegetarian” even though I eat meat (largely for flesh-is-weak reasons). I’ll even refer to meat as “animal suffering” in regular conversation, as in “I usually get the ‘Tarzan’ at Sandwich Spot, which is like the ‘Erica Cato’ but with animal suffering added” or “Spoonrocket’s veggie option today looks edible, so I’ll pass on the one with animal suffering in it.”
I do hope the all-vegan supermarket Veganz opens in Berkeley soon; it seems like a natural first stop for such a thing in the USA. And if they do, I hope they’re available via Instacart. Then I would really have no excuse.
Hypocrisy, and lying in general, is a large part of social advantage as long as there is not a high social punishment enforced when detected by others. Defecting works, if you can get away with it, or if you’re not punished when found out. If there’s no real cost to being found out (or even a potential benefit, as you’ve marked yourself as an in group liar), the benefits outweigh the costs.
I think an honest group outcompetes a dishonest one, but IMO, in today’s world, a dishonest strategy outcompetes an honest one.
But the problem is that we’re not all the machines the comment makes us out to be. Switching on psychopathy is no easier than “just eating less” to lose weight. Easier said than done. For many, being a lying weasel, or even just dealing with one, causes the bile to rise in the gorge, while for others it comes as naturally and comfortably as breathing.
The number of true psychopaths is low, I’m guessing. But the number who instinctively say whatever is socially advantageous is much higher. It’s not like they’re planning to lie and deceive, it’s that they’re interacting to their advantage. Correspondence to reality is simply irrelevant.
Thus the distinction between lies and bullshit.
Ha! Already got that book. But thanks, I’d forgotten about it. Looks like he has another book, this time “On Truth”.
Someone gave me a novel for my birthday, but I’m struggling to remember the title. Something like “The Secret Theories of Doctor Modesto”. Its theme was something like “averageism”, where your personality just oozes to the mean of any group.
I was just discussing Hypocrisy and lying weasels with my sister. Maybe I should send her the book for her birthday.
From the link you provided:
Yes! Truth is irrelevant, advantage is all.
Found “On Bullshit” online.
https://athens.indymedia.org/local/webcast/uploads/frankfurt__harry_-_on_bullshit.pdf
It seems obvious to me that you won’t be able to follow your own moral code in the strictest way possible. Humans are fallible and minds are lazy. The point is not that you should always live up to your moral ideal, it’s that you should always try to do so.
We should allow for failure. If you want to name that sort of failure hypocrisy, I’m fine with that.
I don’t find this self-evident. In which meaning do you use the word “wrong”?
If some of the elements of a moral code contradict some of the other elements, at least one of them must be wrong.
First, inconsistency is not the same thing as contradiction. If my morals involve consulting a random-number generator at some point, the results will be inconsistent in the sense that I will behave differently in the same situation. That does not imply that some elements of my morals contradict other elements.
Second, I still don’t know what does “wrong” mean here.
I think you are confusing logical and behavioral consistency here. The OP meant inconsistent in the logical sense, while you are thinking of behavioral consistency. Another context for consistency is matter, where consistency refers to the viscosity of the material. In each case it refers to how resilient (or resistant to damage) something is.
I wouldn’t call that an inconsistency. Your morals would be “In [situation], do what RNG tells me” and not “In [situation], do X”. Both decision rules are consistent. I’m not sure we mean the same thing by “inconsistent moral code”—I’d say that an inconsistent moral code would have contradictions in it.
Consider if I said “All Xarbles are Yarbles, all Yarbles are Zarbles, but not all Xarbles are Zarbles”. You may have no idea what I’m talking about, but you’d still be able to say that I’m wrong because I’m contradicting myself. Something similar is the case here.
What would be for you an example of inconsistent behavior, then?
If you climb the abstraction tree high enough, you can always get to consistency, if only in the form of “Do what your morals tell you to do”.
I don’t think so. Morals are not syllogisms. In particular, “X is wrong” is a different claim from “X is inconsistent” or “X is not logically coherent”.
If you say that eating meat is wrong, and you eat meat, then either you are factually wrong about eating meat being morally wrong, or you are acting morally wrongly when you eat meat, or both.
It’s not clear whether you are incorrect, immoral, or both. However, what you clearly are not doing is acting in a moral manner because it is moral. You can’t be doing that if you don’t know what’s moral, and you can’t be doing that if you’re acting immorally. You might get lucky and act morally by coincidence, but since that’s not something that can be done consistently, there’s little point in rewarding it.
If you say that eating meat is wrong, but then eat it.
That’s true, but “do what your morals tell you to do” is vacuous and not action-guiding. Morality must be action-guiding, and “In [situation], do X” and “In [situation], do what RNG tells you” are both action-guiding.
If I say “Eating meat is wrong; one should never do something wrong; it is sometimes permissible to eat meat”, there is a contradiction, and that requires at least one of the three statements to be false.
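To spell that out (a sketch in simple propositional notation; the letters are my own shorthand, not part of the original statements): let W stand for “eating meat is wrong” and P for “it is sometimes permissible to eat meat”. The second statement, “one should never do something wrong”, becomes W → ¬P. The three claims are then
W,   W → ¬P,   P
and the first two together give ¬P, which directly contradicts the third. Logic alone forces you to drop at least one of the three; it doesn’t tell you which one.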
Lessee… You said
So for this situation the morals would be
if (coinflip == true) { say “Eating meat is wrong” } else { say “Eating meat is not wrong” }
Eat meat
I don’t really see the difference in that respect between “do what your morals tell you to” and “do what the RNG tells you to”.
Because “do what your morals tell you to do” is self-referential, as your morals are what you should do. “Do what your morals tell you to do” unpacks to “do what you should do”, so if someone asks you what you should do, you can only respond “What I should do”. “Do what the RNG tells you to do” is not self-referential.
It’s an interesting thought.
Hypocrisy involves (I think necessarily) deceit, which is why I think it is viewed as “wrong”. Most people I know are okay with the idea that living up to the ideals we hold is very difficult, and are okay with it when people talk a better game than they actually live.
As an aside, then, if anyone is interested in the sort of thing Stephenson is possibly referring to, David Foster Wallace’s essay E Unibus Pluram: Television and U.S. Fiction (1993, two years before The Diamond Age) is a classic. In DFW’s version, hypocrisy was the monarch of vices for a time, although discourse was not a matter of simply pointing it out (which still required the kind of positive statement untenable to a jaded relativist) so much as satirizing it. But that kind of irony was co-opted, leaving people not only unable to take a positive moral stand but now also ineffectual in the only critique remaining. He suggested a return to sincere, positive values:
I would expect anyone who genuinely believes that eating meat is wrong to not eat meat. If they eat meat while talking about how wrong it is, they believe something other than “Eating meat is wrong”, such as “I don’t want other people to eat meat”. Or perhaps they think that eating meat promotes suffering, and suffering is socially assigned the label “bad”, but they don’t actually think that the extent to which they contribute to it is bad.
If your high moral ideals are unappealing to you, then perhaps they’re incorrect and you shouldn’t abide by them. More generally, if you can’t live up to your own ideals, you should reexamine what you mean by “should” and “your ideals”.
Well, for me it is a habit (I was eating meat before I realized it is wrong) and convenience (society constantly provides me with easy opportunities to pay other people to torture and kill animals for me, and even not to think about it).
If I had a magic button that would rewrite my habits, I would push it, even if it meant that no one else would be influenced by it.
Seems like you are trying to redefine “wanting” to mean what people are actually doing. There are some problems with this, but it’s kinda difficult to explain the problem with that. Something about people not being automatically strategic, or not being utility maximizers...
It’s conceivable that you forget that eating meat is unethical every time you order or buy it, but that’s unlikely. If you really believed “I shouldn’t buy meat” when you had the opportunity to buy it, you wouldn’t buy it. (It doesn’t mean you wouldn’t find meat appealing, only that you wouldn’t buy it.)
People don’t always want to do what they should do. Sometimes they have habits that cause them to forget that they’re doing something wrong. Sometimes they’re inconsistent, and may not realize that they’re acting contrary to what they’d do if they were consistent. But people can’t simultaneously believe “I shouldn’t do X” and “I should do X”, as in “I shouldn’t eat meat” and “I should buy this chicken”.
I don’t think that’s what Viliam_Bur is saying.
Look at this problem that I encounter every day: I believe that I should learn how to program, and yet I end up playing Team Fortress. This can be because of laziness or akrasia, or because I’ve spent my willpower on other things.
The same thing can happen for broader moral considerations. If you believe eating meat is wrong, you can still end up eating meat if you have a strong craving, don’t feel like arguing with the person cooking your meal or any other reason.
For perfect rationalists, acting in accordance with their moral values is easy and takes no effort. Unfortunately, this isn’t the case for humans.
I don’t think akrasia can apply to the area traditionally considered to be morality. If you believe doing something would be evil, that feels different from it being merely suboptimal and harmful to yourself. For example, you like playing TF2, even though it may be suboptimal to play it at times; but even though it’s a habit, you’d instantly stop doing it if, say, the player avatars in TF2 were real beings that experienced terror, pain, and suffering in the course of gameplay. It stands to reason that eating meat would be the same.
I searched for “I want to be vegan but love meat”. It was in Google autocomplete and has plenty of results, including this Yahoo Answers page, which explicitly mentions that the poster wants to be a vegetarian for ethical reasons.
I don’t think that’s a counterexample. If I had a billionaire uncle who willed me his fortune, I could say something like “I like money but I don’t want to commit murder”—and then I wouldn’t commit murder. Liking the taste of meat and still abstaining from it because you think eating it is evil is similar.
The point of it wasn’t to say that people like meat. The point was that people have or expect akrasia from not eating meat enough that they search Google and ask people on question sites for help.
I used to believe, like you, that if you believe something is morally good then you will do it. That axiom used to be a cornerstone of my model of morality. There was actually a stage in my life where my moral superiority provided most of my self-esteem and disobeying it was unthinkable. When I encountered belief in belief, I couldn’t make sense of it at all. I was further confused that people didn’t admit it when I explained how they were being inconsistent.
But besides that, I don’t think humans evolved to have that kind of consistency. I believe that humans act mostly according to reinforcement. Morality does provide a form of reinforcement, in the sense that you feel good when you act morally and worse otherwise; however, if there were a sufficient external motivator, such as extreme torture, you would eventually give in, perhaps rationalizing the decision.
I would suggest the people who have commented here read this post if they haven’t yet, because there have been two arguments over definitions here already (first about consistency and then about the definition of “genuine belief”), and there is a reason that is frowned upon. You should also see Belief in belief for a better understanding of how people can act contrary to their stated morals and behave in contradictory ways. (It typically comes up a lot with religious people, who don’t try to be as moral as they can be despite viewing it as good.)
I don’t believe that you believe you should learn how to program. I do believe you think it would be good for you to learn to program, but you also enjoy TF2 and choose to play it instead. If it happened occasionally, I’d believe that you could occasionally forget your real goals, but if it happens all the time, I question that your real goals are what you say they are.
This is even more the case with eating meat. If you genuinely believe that eating meat harms others and that’s morally relevant, it’s more than just the self-inflicted harm of not learning to program, it’s actually doing evil that hurts something. If you believe something is morally wrong, you will act accordingly—catching yourself in a state of “I’m in the habit of doing something evil” would be such a horrifying realization that it would instantly shatter the habit.
I think this is an instance of the near/far distinction. Morality is far. It’s what I espouse, what I consciously intend to do when the decision isn’t being made right now, and what I’d program an AI to enforce.
On the contrary, it is entirely possible to have a belief and not act on it. I believe that Singer’s arguments for vegetarianism are very strong. But I still eat meat because it is delicious and convenient.
I find utilitarian arguments fairly strong, and I know that at the very least several thousand more of the most frivolous dollars I spend every year could be allocated to altruistic causes with great net benefit by any utilitarian and many non-utilitarian ethical frameworks, yet I do not do so.
I know that exercise is good for my health, yet I partake too infrequently. That does not mean I fundamentally misunderstand the benefits of exercise, or misunderstand the value of health.
It is possible to say that something is wrong according to a certain moral framework (for example, utilitarianism) and not subscribe to that framework. If Singer makes strong utilitarian arguments against eating meat, but you eat meat because it’s delicious, it can be perfectly consistent if you’re not a utilitarian. You can agree with many utilitarian premises and conclusions and still not be a utilitarian.
Edit: If you consider utilitarianism to be correct, what do you mean by that?
I mean that I have no ethical basis for meat-eating. “Meat is delicious” is an argument from selfish hedonism, and I could not provide a credible philosophical justification.
If you’re familiar with the comedian Louis CK, the basis of most of his comedy is that he understands how to behave ethically, to respect his fellow human beings, to improve himself and the world around him, yet most of the time he persists in perversely defying his better impulses. Singer addresses the same topic: it is entirely possible to be unethical—the sky will not fall, the oceans will not boil, you will not be sent to hell. But you shouldn’t do it, because it is unethical. Still, if you behave unethically, as all of us frequently do, the earth will keep on spinning.
I believe utilitarianism is, roughly, a correct framework for ethics (to qualify that, I believe that worrying over specifics of ethical frameworks is a rabbit-hole that you shouldn’t head down, since most ethical frameworks will correlate heavily in terms of ordinal rankings of actions actually available to you in regular life).
A selfishly hedonistic lifestyle is unethical by almost any standards, certainly none I subscribe to, yet that is essentially how I live (I believe that most people are mostly selfishly hedonistic most of the time; I am no exception).
I could tie myself in knots trying to excuse myself from charges of hypocrisy, but I think I, along with most people, essentially am a hypocrite w/r/t my declared values.
Louis CK, channeling Peter Singer: “My Life Is Really Evil”
“Selfish hedonism” is also an ethical system, though not a very popular one. You could say that meat gives you pleasure and that ethically justifies eating it, even though it causes some suffering.
I agree that it’s possible to be unethical, but I don’t believe that it’s possible to believe that you’re doing something unethical while you’re doing it, not if you believe that you actually believe that you shouldn’t do it. (On the other hand, it’s perfectly possible to think “This is what society in general or a particular ethical system labels as unethical, but I don’t agree with it.”)
If you believe utilitarianism to be correct but don’t always act as a utilitarian would, what do you mean when you say that you believe that utilitarianism is correct? One possibility is that you forget that utilitarianism is correct every time you have the opportunity to buy or eat meat, but this seems unlikely. Another possibility is that you forget that meat-eating is bad from a utilitarian perspective when you have an opportunity to eat meat, but this is also unlikely. So what do you mean by “utilitarianism is… correct”?
Really?
Really. To unpack that statement, “unethical” = “what one shouldn’t do”. If you’re choosing to do something, you think you should do it, so you obviously can’t be thinking that you shouldn’t do it.
On the other hand, if “unethical” means “what one shouldn’t do, according to X”, one can certainly do something they consider to be unethical. This second definition is also a common one.
Confusion between the two different meanings is at the root of much disagreement about ethics.
Here are a few related but different questions.
“What do I feel most inclined to do right now?”
“What do I, on reflection, think it would be best to do right now?”
“What do I, on reflection, think it would be best to do right now if I tried to suppress my natural tendencies to be more concerned for myself than others, more concerned for those close to me than those further away, etc.?”
If you define “X thinks s/he should do Y” in terms of X’s answer to question 1 (or some slight variant worded to ensure that it always matches what X is actually doing) then, indeed, no one ever does anything they think they “shouldn’t”. But I see no reason at all to think that this sense of “should” has anything much to do with what’s usually called ethics, or indeed with anything else of much interest to anyone other than maybe X’s psychiatrist. Our actions are driven not only by our stable long-term values but also by any number of temporary whims, some of them frankly crazy.
If you define “X thinks s/he should do Y” in terms of X’s answer to question 2 or 3, then you can make a case that “should” is now something to do with ethics (especially for question 3, but maybe also for question 2) -- but now it’s not at all true that a person’s actions always match what they “think they should do”. I frequently do things that, on the whole, I think I shouldn’t do. Often while actually thinking, in so many words, “I really shouldn’t be doing this.”
And all this is true whether X is thinking about what-it-would-be-best-to-do explicitly in terms of “best in such-and-such a system of values”, or taking “best” as having an “absolute” meaning somehow.
I am not defining “X thinks they should do Y” in terms of 1, but in terms of 2. People can certainly feel inclined to do things they shouldn’t do. But if you force them into a reflective mode and they still act as they did before, it tells you about what they really believe. If it’s a failure of self-control due to habits/forgetfulness, that I can understand. But in the case of reluctant meat-eaters, it seems to be something more than that—they claim to not want to eat meat, but if you don’t want to eat meat, it’s easy not to—just don’t buy it and then you won’t have any meat to eat. Sometimes people buy things they wouldn’t reflectively want, but that’s when they’re buying something they’d view as harmful to the self (or just suboptimal), and not in the general category of “evil”. No one can simultaneously reflectively think “I shouldn’t do this (because it’s evil)” and “I should do this (evil) thing”. The only possibility is that for reluctant meat-eaters, meat is an impulse buy, but that seems unlikely.
I suspect you’re using two different meanings of “should” here.
OK, so either now you’re making a weaker claim than the one you started out with (“I don’t believe that it’s possible to believe that you’re doing something unethical while you’re doing it”) or I misunderstood what you meant before. Because people frequently aren’t in “a reflective mode”. (And I don’t think believing something’s unethical requires being in a reflective mode.)
But you still haven’t moved far enough for me to agree (not that there’s any particular reason you should care about that). I think I have frequently had the experience of reflecting that I really don’t want to be doing X, while doing X. It’s not that I’m not in reflective mode, it’s that the bit of me that’s in reflective mode doesn’t have overall control.
This is all a separate matter, by the way, from the question of how to use terms like “should”, “ethical”, etc., in the face of the fact that we (almost) all care much more about ourselves than about distant others, and that many of us hold that in some sense we shouldn’t. I appreciate that you wish to use those terms to refer to a person’s “overall” values as (maybe inexactly) shown by their actions, rather than to their theoretical beliefs about what morally perfect agents would do. I’m not sure I agree, but that isn’t what I’m disagreeing with here.
What meanings, and where do you think I’m using each?
I suspect our inferential distance may be too great for agreement at this time. But, to clarify one point:
You said “I frequently do things that, on the whole, I think I shouldn’t do. Often while actually thinking, in so many words, ‘I really shouldn’t be doing this.‘”. This is a plausible rephrasing of “I frequently do things that I generally disapprove of and perhaps would prefer if people in general wouldn’t do them, also I may sometimes feel guilty about doing things I disapprove of, especially if they’re generally socially disapproved of in my culture, subculture, or social group. When I do these things, I think the words ‘I shouldn’t do this’, by which I don’t literally mean that I shouldn’t do this, but that doing this is ‘boo!’/‘ugh’/low-status/seems to conflict with things I approve of/would not happen in a world I’d prefer to live in.”
Oh. Would you care to say more?
So, your proposed expansion of my second “should”: (1) on what grounds do you think it likely that I mean that, and (2) is it actually different from your proposed expansion of the first? (“Seems to conflict with things I approve of” and “would not happen in a world I’d prefer to live in” are not far from “things that I generally disapprove of” and “perhaps would prefer if people in general wouldn’t do them”, respectively.)
It seems a little curious to me that your proposed expansion of my second “should” offers, in fact, not one possible meaning but five (though I’m not sure there’s a very clear distinction between “boo!” and “ugh” here). It seems to me that this weakens your point—as if you’re sure I must mean something other than what I say, but you have no real idea what.
In fact, despite your dismissive references to social status in what you say, I can’t help suspecting that you’re trying to pull a sort of status move here: when blacktrance says “should” s/he really means “should”, but when gjm says “should” he means “hooray!” or “high-status” or something—anything! -- with a little touch of intellectual dishonesty about it.
Well, you might be right. But let’s see some evidence, if so.
This wouldn’t be the first time I’ve run into inferential distances when discussing ethics on LW, and I suspect it to be the case here, perhaps in part due to differences in terminology, in part due to unstated background assumptions.
I don’t know if you in particular mean that, but it’s a common usage I’ve noticed among people who do things that they say they shouldn’t do.
I think I rambled a little too much in my expansion, so to compress it into something more compact: “I occasionally do things I and/or people whose opinions I care about label as ‘morally bad’, and when I do these things, I think the words ‘I shouldn’t do this’. In part I’ve internalized that doing this thing is ‘bad’, but I don’t actually think it’s bad, and I still choose to do it.” To further clarify, when people say “I shouldn’t do X”, they mean that it feels like an external imposition for them, and if they could do what they wanted, they’d cast it aside and do X, and only the desire to be moral (perhaps motivated by guilt, shame, or adherence to social norms) is keeping them from doing it. There is another sense of “I shouldn’t do X”, as in “I shouldn’t put my hand on a hot stove”—there’s no external imposition there, motivation is entirely internal. Both meanings of “should” are common, and perhaps I am wrong to say that only the second, internal meaning of “should” is valid.
If one thinks that one externally-shouldn’t eat meat, they may still eat meat because they don’t think they internally-shouldn’t eat meat. I forgot (due to inferential differences) that belief that morality is external is common (a belief I do not share), and in that case it’s certainly possible to believe you’re acting unethically and still consistently want to eat meat.
Yes, we seem to be having terminology problems.
For the record, let me briefly define the words I’m using.
Morality (=morals) is a system of values along with the importance (=weight) that people attach to them. In most real-life situations any course of action will conflict with some values so decision-making is an exercise in balancing values and deciding on acceptable trade-offs.
Ethics is a collection of action guidelines driven by the morals. Because most decisions are trade-offs, it’s common for actions to match some ethical guidelines and not match other ones.
Generally speaking, our conscious mind does the balancing act and comes up with a “what should I do” decision, but the unconscious mind does its own calculation and may come up with another decision. If the decisions are different, you have the usual problems under the umbrella of hypocrisy, guilty conscience, etc.
People usually speak of morals and ethics meaning the calculations done by the conscious mind. So it’s perfectly possible for one to think “I should not eat that pint of ice cream” while gobbling it up. The mind is not a single agent.
I usually use the terms “morality” and “ethics” interchangeably, and in the sense in which “X is moral” and “one should do X” are synonymous.
The extent to which you attribute the differences between beliefs and behavior to such factors seems unrealistic. Certainly, people sometimes fall into habits, aren’t mindful, forget what they’re doing, etc., but it seems implausible that this would lead to such wide disparities between what your conscious mind thinks you should do and what you actually do. It would mean that if I were to remind someone who professes that eating meat is wrong of their belief while they’re reaching for a piece of steak at the store, they’d consider what they’re doing and choose not to buy the steak. While this may be the case some of the time, it would have to happen much more often than it actually does.
What really goes on, I think for most people and certainly myself, is compartmentalization. I understand certain things to be ethical and others to be unethical, and when it comes time to make a decision (eating meat, for instance) that question is entirely neglected, or skimmed over.
Now, clearly animal suffering is something I don’t really care about. But that doesn’t mean I have any argument or foundation for believing that it is legitimately unimportant. I think this is much truer for an issue I care more about (but not enough to act fully ethically): poverty and altruism. I know that people across the world are impoverished and could benefit from my altruism more than I will benefit from something frivolous and overpriced I might buy instead. But I may still buy the frivolous thing at times.
And all but the most committed people will behave this way most of the time; they will not even earnestly try to behave ethically, but instead behave conveniently.
Yes, these are both unlikely, but replace “forget” with “habitually conspire with myself to forget/ignore/brush off”.
Think of it this way: whether someone sticks to a diet (for health, let’s say, and not vegetarianism) is partly a matter of belief in the importance of the diet, but it is also partly a matter of habit, convenience, impulse, and opportunity. The same is true for when we follow our ethical beliefs.
Compartmentalization does make it sound as if you forget that eating meat is unethical when it’s decision time.
Do you need an argument for believing it’s legitimately unimportant? Why not just say that it’s an arbitrary taste? The same goes for altruism—other people may benefit more from your money than you do, but, you don’t care nearly as much about them as you care about yourself. Utilitarianism says that’s wrong, but why should you think that utilitarianism is correct?
As for diets, when someone develops habits that maintain a diet, it’s because they believe that diet to be correct.
You are right that tastes are a deciding factor, but you’re taking it too far. According to you, it is impossible to act unethically, and/or your personal ethics must be consistently determined by your actions. I can essentially behave entirely arbitrarily, and to you I will be obeying my own true code of ethics.
A big part of what this site addresses is how humans are inconsistent, irrational, self-deceiving, and short-termist. Can we at least agree that there are moments when people take actions that are more inconsistent, irrational, and self-deceiving, and moments when their actions are better harmonized with their stated/aspirational goals and beliefs?
And can we agree that if I believe, as most reasonable people do, that irrational anger is bad, yet I flip someone off in a bout of road rage, it’s possible I’m failing to live up to a consistent set of beliefs which I legitimately care about, rather than my stated beliefs being a veneer over my true, sometimes-road-raging beliefs?
And if you’ve ever been on a diet or known people on a diet, you know that circumstance and external factors (say, trainer or family support, distance to the nearest grocery store vs. nearest fast food place) make a huge difference to adherence, even when there’s no clear tie between those things and how correct the person believes the diet to be?
Not at all. It’s certainly possible to act unethically, such as if you’re inconsistent, or if you have mistaken beliefs about what’s ethical. What you can’t do is intentionally do something while consciously thinking that it’s unethical. For example, you can’t think “I’ll torture these children, even though torturing children is wrong”—not if you believe that torturing children is wrong.
It is true that people are sometimes inconsistent, because sometimes they act according to their habits instead of deliberately, or because strong emotions overwhelm them and they forget to do what they believe to be correct. But if that were the main explanation for why people don’t always do what they believe to be right, I would expect people to have the feeling of “Oops, I forgot and messed up” more often than they seem to. Instead, something like “It’s wrong, but I’m going to do it anyway” seems to be more common, which implies that they don’t really think it’s wrong.
Minds are modular. A part of your mind could believe that something was wrong, while another part didn’t care and just wanted to do it.
Calling actions ‘right’ or ‘wrong’ confuses the issue, because it assumes an absolute scale of value. In many systems, actions are of (positive or negative) value to someone, and of different value to someone else. So eating a chicken is certainly bad for the chicken, but it may be good for the eater, and then you need to weight the two things against each other.
It’s perfectly consistent for me to believe that eating meat is ‘wrong’ in the sense of being harmful to the animal being eaten, and yet I do eat meat because the value to myself outweighs that, so it’s ‘right for me’.
“Right” and “wrong” mean something more than “bad for X and good for Y”, they are normative, “wrong” meaning “what one ought not do”. So if I believe that it’s wrong to eat meat, I am saying something more than eating meat is bad for the chicken, I mean that I should not eat meat.