The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It
Joshua Greene has a PhD thesis called The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It. What is this terrible truth? In essence, that many, many people (probably most people) believe that their particular moral (and axiological) views on the world are objectively true—for example, that anyone who disagrees with the statement “black people have the same value as any other human beings” has either committed an error of logic or got some empirical fact wrong, in the same way that people who claim that the earth was created 6000 years ago are objectively wrong.
To put it another way, Greene’s contention is that our entire way of talking about ethics—the very words that we use—forces us into talking complete nonsense (often in a very angry way) about ethics. As a simple example, consider the words used in any standard ethical debate—“abortion is murder”, “animal suffering is just as bad as human suffering”—these terms seem to refer to objective facts; “abortion is murder” sounds rather like “water is a solvent!”. I urge readers of Less Wrong to put in the effort of reading a significant part of Greene’s long thesis starting at chapter 3: Moral Psychology and Projective Error, considering the massively important repercussions he claims his ideas could have:
In this essay I argue that ordinary moral thought and language is, while very natural, highly counterproductive and that as a result we would be wise to change the way we think and talk about moral matters. First, I argue on metaphysical grounds against moral realism, the view according to which there are first order moral truths. Second, I draw on principles of moral psychology, cognitive science, and evolutionary theory to explain why moral realism appears to be true even though it is not. I then argue, based on the picture of moral psychology developed herein, that realist moral language and thought promotes misunderstanding and exacerbates conflict. I consider a number of standard views concerning the practical implications of moral anti-realism and reject them. I then sketch and defend a set of alternative revisionist proposals for improving moral discourse, chief among them the elimination of realist moral language, especially deontological language, and the promotion of an anti-realist utilitarian framework for discussing moral issues of public concern. I emphasize the importance of revising our moral practices, suggesting that our entrenched modes of moral thought may be responsible for our failure to solve a number of global social problems.
As an accessible entry point, I have decided to summarize what I consider to be Greene’s most important points in this post. I hope he doesn’t mind—I feel that spreading this message is sufficiently urgent to justify reproducing large chunks of his dissertation. Starting at page 142:
In the previous chapter we concluded, in spite of common sense, that moral realism is false. This raises an important question: How is it that so many people are mistaken about the nature of morality? To become comfortable with the fact that moral realism is false we need to understand how moral realism can be so wrong but feel so right. …
The central tenet of projectivism is that the moral properties we find (or think we find) in things in the world (e.g. moral wrongness) are mind-dependent in a way that other properties, those that we’ve called “value-neutral” (e.g. solubility in water), are not. Whether or not something is soluble in water has nothing to do with human psychology. But, say projectivists, whether or not something is wrong (or “wrong”) has everything to do with human psychology.…
Projectivists maintain that our encounters with the moral world are, at the very least, somewhat misleading. Projected properties tend to strike us as unprojected. They appear to be really “out there,” in a way that they, unlike typical value neutral properties, are not. …
The respective roles of intuition and reasoning are illuminated by considering people’s reactions to the following story:
“Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decided that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love but decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. What do you think about that, was it OK for them to make love?”
Haidt (2001, pg. 814) describes people’s responses to this story as follows: Most people who hear the above story immediately say that it was wrong for the siblings to make love, and they then set about searching for reasons. They point out the dangers of inbreeding, only to remember that Julie and Mark used two forms of birth control. They next try to argue that Julie and Mark could be hurt, even though the story makes it clear that no harm befell them. Eventually many people say something like
“I don’t know, I can’t explain it, I just know it’s wrong.”
This moral question is carefully designed to short-circuit the most common reason people give for judging an action to be wrong, namely harm to self or others, and in so doing it reveals something about moral psychology, at least as it operates in cases such as these. People’s moral judgments in response to the above story tend to be forceful, immediate, and produced by an unconscious process (intuition) rather than through the deliberate and effortful application of moral principles (reasoning). When asked to explain why they judged as they did, subjects typically gave reasons. Upon recognizing the flaws in those reasons, subjects typically stood by their judgments all the same, suggesting that the reasons they gave after the fact in support of their judgments had little to do with the process that produced those judgments. Under ordinary circumstances reasoning comes into play after the judgment has already been reached in order to find rational support for the preordained judgment. When faced with a social demand for a verbal justification, one becomes a lawyer trying to build a case rather than a judge searching for the truth. …
The Illusion of Rationalist Psychology (p. 197)
In Sections 3.2-3.4 I developed an explanation for why moral realism appears to be true, an explanation featuring the Humean notion of projectivism according to which we intuitively see various things in the world as possessing moral properties that they do not actually have. This explains why we tend to be realists, but it doesn’t explain, and to some extent is at odds with, the following curious fact. The social intuitionist model is counterintuitive. People tend to believe that moral judgments are produced by reasoning even though this is not the case. Why do people make this mistake? Consider, once again, the case of Mark and Julie, the siblings who decided to have sex. Many subjects, when asked to explain why Mark and Julie’s behavior is wrong, engaged in “moral dumbfounding,” bumbling efforts to supply reasons for their intuitive judgments. This need not have been so. It might have turned out that all the subjects said things like this right off the bat:
“Why do I say it’s wrong? Because it’s clearly just wrong. Isn’t that plain to see? It’s as if you’re putting a lemon in front of me and asking me why I say it’s yellow. What more is there to say?”
Perhaps some subjects did respond like this, but most did not. Instead, subjects typically felt the need to portray their responses as products of reasoning, even though they generally discovered (often with some embarrassment) that they could not easily supply adequate reasons for their judgments. On many occasions I’ve asked people to explain why they say that it’s okay to turn the trolley onto the other tracks but not okay to push someone in front of the trolley. Rarely do they begin by saying, “I don’t know why. I just have an intuition that tells me that it is.” Rather, they tend to start by spinning the sorts of theories that ethicists have devised, theories that are nevertheless notoriously difficult to defend. In my experience, it is only after a bit of moral dumbfounding that people are willing to confess that their judgments were made intuitively.
Why do people insist on giving reasons in support of judgments that were made with great confidence in the absence of reasons? I suspect it has something to do with the custom complexes in which we Westerners have been immersed since childhood. We live in a reason-giving culture. Western individuals are expected to choose their own way, and to do so for good reason. American children, for example, learn about the rational design of their public institutions; the all important “checks and balances” between the branches of government, the judicial system according to which accused individuals have a right to a trial during which they can, if they wish, plead their cases in a rational way, inevitably with the help of a legal expert whose job it is to make persuasive legal arguments, etc. Westerners learn about doctors who make diagnoses and scientists who, by means of experimentation, unlock nature’s secrets. Reasoning isn’t the only game in town, of course. The American Declaration of Independence famously declares “these truths to be self-evident,” but American children are nevertheless given numerous reasons for the decisions of their nation’s founding fathers, for example, the evils of absolute monarchy and the injustice of “taxation without representation.” When Western countries win wars they draft peace treaties explaining why they, and not their vanquished foes, were in the right and set up special courts to try their enemies in a way that makes it clear to all that they punish only with good reason. Those seeking public office make speeches explaining why they should be elected, sometimes as parts of organized debates. Some people are better at reasoning than others, but everyone knows that the best people are the ones who, when asked, can explain why they said what they said and did what they did.
With this in mind, we can imagine what might go on when a Westerner makes a typical moral judgment and is then asked to explain why he said what he said or how he arrived at that conclusion. The question is posed, and he responds intuitively. As suggested above, such intuitive responses tend to present themselves as perceptual. The subject is perhaps aware of his “gut reaction,” but he doesn’t take himself to have merely had a gut reaction. Rather, he takes himself to have detected a moral property out in the world, say, the inherent wrongness in Mark and Julie’s incestuous behavior or in shoving someone in front of a moving train. The subject is then asked to explain how he arrived at his judgment. He could say, “I don’t know. I answered intuitively,” and this answer would be the most accurate answer for nearly everyone. But this is not the answer he gives because he knows after a lifetime of living in Western culture that “I don’t know how I reached that conclusion. I just did. But I’m sure it’s right,” doesn’t sound like a very good answer. So, instead, he asks himself, “What would be a good reason for reaching this conclusion?” And then, drawing on his rich experience with reason-giving and -receiving, he says something that sounds plausible both as a causal explanation of and justification for his judgment: “It’s wrong because their children could turn out to have all kinds of diseases,” or, “Well, in the first case the other guy is, like, already involved, but in the case where you go ahead and push the guy he’s just there minding his own business.” People’s confidence that their judgments are objectively correct combined with the pressure to give a “good answer” leads people to produce these sorts of post-hoc explanations/justifications. Such explanations need not be the results of deliberate attempts at deception. 
The individuals who offer them may themselves believe that the reasons they’ve given after the fact were really their reasons all along, what they “really had in mind” in giving those quick responses. …
My guess is that even among philosophers particular moral judgments are made first and reasoned out later. In my experience, philosophers are often well aware of the fact that their moral judgments are the results of intuition. As noted above, it’s commonplace among ethicists to think of their moral theories as attempts to organize pre-existing moral intuitions. The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover. For example, philosophers are as likely as anyone to think that there must be “some good reason” for why it’s okay to turn the trolley onto the other set of tracks but not okay to push the person in front of the trolley, where a “good reason,” of course, is a piece of moral theory with justificatory force and not a piece of psychological description concerning patterns in people’s emotional responses.
One might well ask: why does any of this indicate that moral propositions have no rational justification? The arguments presented here show fairly conclusively that our moral judgements are instinctive, subconscious, evolved features. Evolution gave them to us. But readers of Eliezer’s material on Overcoming Bias will be well aware of the character of evolved solutions: they’re guaranteed to be a mess. Why should evolution have happened to have given us exactly those moral instincts that give the same conclusions as would have been produced by (say) great moral principle X? (X = the golden rule, or X = hedonistic utilitarianism, or X = negative utilitarianism, etc).
Expecting evolved moral instincts to conform exactly to some simple unifying principle is like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low complexity description.
Now I can imagine a “from first principles” argument producing an objective morality that has some simple description—I can imagine starting from only simple facts about agenthood and deriving Kant’s categorical imperative as the one objective moral truth. But I cannot seriously entertain the prospect of a “from first principles” argument producing the human moral mess. No way. It was this observation that finally convinced me to abandon my various attempts at objective ethics.
I agree with most of these excerpts, but I’d like to see evidence for the claim that Western culture is the main cause of our tendency to construct post hoc rationalizations for our moral intuitions. I suspect that much of it is an innate human tendency, and that Western culture just mediates which rationalizations are considered persuasive to others.
That is, if they ran some version of the incest thought experiment in non-Western societies, I predict you would get the same ‘moral dumbfounding’ effect; you’d just have to construct the scenario in a way that negates that culture’s standard rationalizations.
Agree and furthermore suggest that this goes beyond morality itself: people make fast perceptual judgments that proceed directly from salient features to categories to inferred characteristics. Brother-sister love → “incest” → “wrong” in the same way that human shape → “human” → “mortal”. The moral judgment is just one more inferred characteristic from the central category.
Minor point: I find Julie-and-Mark-like examples silly because they ask for a moral intuition about a case where the outcome is predefined. Our moral intuition makes arguments of the form “behavior X usually leads to a bad outcome, therefore X is wrong”. So if the outcome is already specified, the intuition has nothing to say; nor would we expect it to. The whole point of morality is to help you make decisions between live possibilities, so why should it have anything to say about a situation that has already happened and cannot be altered?
Or to put it another way, I’m surprised no one said something to the effect of “Julie and Mark shouldn’t have had sex because at the time they did they had no way of knowing that it would turn out well, and in fact every reason to believe it would turn out very badly, based on the experiences of other incestuous siblings.”
For concreteness, imagine a different story where Julie and Mark decide to play Russian roulette in their cabin (again, just for fun). They both miss the bullet, no harm results, and they never tell anyone etc. etc. So what was wrong with their actions?
I think most people would be able to handle that one very quickly. So the really interesting question is why no-one comes up with such an explanation in the incest case.
An interesting analogy. I mean, who would predict something crazy like the square of the orbital period being proportional to the cube of the orbital radius?
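For what it’s worth, that “crazy” regularity is easy to verify numerically. Here’s a quick sketch (the planet values are standard textbook figures: periods in Earth years, semi-major axes in AU) showing that T²/a³ comes out essentially constant across the planets:

```python
# Kepler's third law: T^2 / a^3 is (very nearly) the same for every planet
# when T is in Earth years and a is in astronomical units.
planets = {
    "Mercury": (0.241, 0.387),
    "Venus": (0.615, 0.723),
    "Earth": (1.000, 1.000),
    "Mars": (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
}

for name, (T, a) in planets.items():
    # Each ratio lands within about 1% of 1.0.
    print(f"{name}: T^2/a^3 = {T**2 / a**3:.3f}")
```

The point being that a messy-looking pile of orbital data did, in fact, hide a one-line law — which is exactly the kind of surprise the sarcasm above is trading on.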
Obviously there’s no unifying principle in all that messy moral randomness. No hidden laws, just waiting to be discovered...
I think the whole point is that our moral intuition doesn’t make arguments—we have them, and then we come up with rationalizations ex post facto.
But empirically, intuition really did prompt lots of people to classify the incest as a moral transgression.
Right, I phrased that very badly. What I was trying to say is that the moral intuition was trained (evolutionarily or whatever) to map from behaviors to right/wrongness based on the (weighted) set of possible outcomes. So when we’re given a behavior, the intuition spits out a right/wrong decision based on what was likely to have happened, not considering what was stipulated in the problem to have actually happened.
See, and I was going to write that your second paragraph was more insightful and didn’t really follow from your first paragraph. I was going to say that it seemed like that was what moral intuition was actually calculating inaccessibly, so it is indeed interesting that (as far as Greene reports) nobody does come out with it as a rationalization. But then I held off, because I thought that I was just projecting my own thought process onto your words, and you might have meant something more in line with your first paragraph by them.
One thing I found a bit dodgy about that example is that it just asserts that the outcomes were positive.
I would bet that, for the respondents, simply being told that the outcomes were positive would still have left them feeling that in a real brother-sister situation like that there would likely have been some negative consequences.
Greene does not seem to take this into account when he interprets their responses.
Agreed. Morality is for determining what one has most reason to do or want. Clearly asking after the fact “did they do the wrong thing?” doesn’t mesh well with what morality is for. But the finger-wagging sorts of moralists might not agree.
Who says what morality is for? People have moral instincts which are used, more often than not, to evaluate already-finished actions on a good-bad scale. People engaged in actions evaluated as wrong tend to be labeled as bad people in consequence. We encounter this use of morality every day. Maybe you claim that morality should be used differently, but that’s your (meta-)moral judgement (a prescriptive statement), while the original post and the thesis it referred to were descriptive about morality (and I think accurately so).
True, although finger-waggers do say things like “Well sure, it might have turned out okay this time. But that doesn’t mean it was a good idea.”
Ugh. Where to start...
Yes, because evolution gave us the instincts that solved the prisoner’s dilemma and made social life possible. Which is why Jonathan Haidt finds it more helpful to define morality as, rather than being about harm and fairness, something like:
Greene is basically screaming bloody murder at how people stupidly conclude that incest is wrong in a case where some bad attributes of incest don’t apply, and how this is part of a more general flaw involving people doing an end-run around the usual need to find rational reasons for their moral judgments.
His view is in complete ignorance of recent ground-breaking research on the nature of human morality (see above link). Basically, most secular academics think of morality only in terms of harm and fairness, but, worldwide, people judge morality on three other dimensions as well: ingroup/loyalty (do we maintain a cohesive group?), authority/respect, and purity/sanctity (the last one being the intuition challenged by Greene’s example).
While political discourse in the West has focused on harm and fairness, human nature in general judges from all five. This narrow focus has resulted in Westerners, not surprisingly, being unable to justify intuitions drawn from the other three dimensions unless they come from a … religious background!
Or, more succinctly, morality is a meme that enables solutions to the prisoner’s dilemma. All five dimensions, to some extent, work toward that end.
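As a toy illustration of that framing (my own sketch, not anything from Greene or Haidt): in an iterated prisoner’s dilemma with the standard payoffs, a reciprocal norm like tit-for-tat sustains full cooperation against itself, while unconditional defection locks both sides into the bad equilibrium:

```python
# Iterated prisoner's dilemma with the standard payoff matrix:
# mutual cooperation -> 3 each; mutual defection -> 1 each;
# a defector exploiting a cooperator -> 5 vs 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    """Total payoffs for two strategies over repeated play.

    Each strategy sees only the opponent's move history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # cooperate first, then mirror
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation sustained
print(play(always_defect, always_defect))  # (100, 100): stuck in mutual defection
```

On this view, the five moral dimensions are (roughly) culturally and biologically evolved machinery for keeping groups on the cooperative branch of payoffs like these.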
What Greene has discovered is better described as “Westerners do not have the educational background to justify and express their moral intuitions that go beyond harm and fairness.” Congratulations: when you force people to talk about morality purely in terms of harms, you can get them to voice moral opinions they can’t justify.
Had the participants gotten such grounding, they could have answered the incest dilemma like this:
“As stipulated, there is no harm from what the siblings did. However, that’s just disgusting [sanctity], and is disruptive to the social order [authority]. Within your artificial scenario, you have assumed these difficulties away. If I and others did not find such acts disgusting, they would inevitably become more common, and social life would break down: first, from genetic disease, and second, from destabilized family units, where parents are forced to take sides between their own kids. Over time, this hurts society’s ability to solve the prisoner’s dilemma.”
The fact that people cannot connect their moral intuitions to the evolutionary/historical reason that such intuitions evolved is not a reason to come to the conclusion Greene does.
I should also add that it starts off with a very questionable claim:
If you’re correctly paraphrasing Greene, this is misleading at best. Yes, those statements are syntactically similar, but most people are capable of recognizing when a statement starts to make a moral claim (or, upon further questioning, some concept they hold that is isomorphic to morality). They recognize that when you get into talk about something being “just as bad” as something else, you’re talking about morality.
It’s like saying, hey, “Jews are murderworthy” sounds rather like “apples are red”, OBVIOUSLY we are ill-equipped to discuss morality!
FWIW, I don’t even necessarily disagree with Greene that people approach morality from a flawed framework. But his arguments aren’t very good, they ignore the literature, and don’t present the right framework. Thumbs down.
Greene and Haidt have coauthored papers together, so I would guess they are aware of each other’s work!
Silas doesn’t seem to have noticed this…
Sorry for the self-reply, but to expand on my point about the difficulty Westerners have talking about certain dimensions of morality, I want to present an illustrative example from a different perspective.
Let’s say we’re in an alternate world with strong, codified rules about social status and authority, but weak, vague, unspoken norms against harm that nevertheless keep harm at a low level.
Then let’s say you present the people of this world with this “dilemma” to make Greene’s point:
Then, because of the deficiency in the vocabulary of “harms”, you would get responses like:
“Look, I can’t explain why, but obviously, it’s wrong to torture and kill someone for enjoyment. No disrespect to the President, of course.”
“What? I don’t get it. Why would the President order a citizen killed? There would be outrage. He’d feel so much guilt that it wouldn’t even relieve the stress you claim it does.”
“Yeah, I agree the President has authority to do that, but God, it just burns me up to think about someone getting tortured like that for someone else’s enjoyment, even if it is our great President.”
Would you draw the same conclusion Greene does about these responses?
For the reasons I pointed out here, it still seems to me that you’re attacking a straw man. Greene doesn’t conclude from this that morality is not rationally justifiable. He believes that moral realism is false for separate reasons, which are set out at length in Ch. 2 of the dissertation.
AFAICT, the position you’re attacking has only been articulated by Roko.
I do not think it is a strawman that, in the alternate world, Greene would get a good laugh at how people cling so tightly to their anti-torture/murder intuitions, even when the President orders it for heaven’s sake! How strange that “one becomes a lawyer trying to build a case rather than a judge searching for the truth”.
I’m confused. You initially seemed to be criticizing Greene for attempting to conclude, from individuals’ responses to the dilemmas, that morality is not justifiable. I pointed out that Greene was not attempting to draw this conclusion from those data. You now say that your original argument is not a strawman because Greene would “get a good laugh” out of your alternative dilemma.
I would imagine that he might get a good laugh from this situation. After all, being an anti-realist he doesn’t think there are any good reasons for moral judgments; and he might therefore find any circumstance of moral dumbfounding amusing. But I don’t see how that’s especially relevant to the argument.
Whatever one great principle you think human morality flows from, there will be plenty of cases that violate it. Sure, we have some intuitions about minimizing total harm, but there are plenty of people who would have an instinctive moral aversion to many actions that minimize total harm. Many people are actually opposed to torture.
Timeout. I did not claim there was a great principle human morality flows from; I merely think there is more regularity to our intuitions (“godshatter”) than you or Greene would lead one to believe, and much of this is accounted for by that fact that instincts must have arisen that permitted social life, cooperation, accumulation of social capital, etc.. But yes, intuitions are going to contradict; I never said or implied otherwise.
Of course. Minimizing total harm is just one factor. So sure, there are cases where people believe that the torture is worse than whatever it is alleged to prevent. This is not the same, of course, as “not valuing reduction of total harm”.
Then it is a matter of degree: there is a lot of regularity to our moral instincts. But to attack the idea that there are objective moral truths out there, it is enough to see that our intuitions do not fit any one great moral principle.
I suppose you could extract one particular regularity—such as “minimize total harm”—and call that the objective moral truth. But then someone else will extract some other, distinct regularity, such as “never use a person merely as a means” (Kant’s formula of humanity), and call that the objective moral truth. And they contradict each other.
You seem to be confusing Greene’s argument with Roko’s gloss on Greene’s argument (which is not to say your criticisms aren’t valid, they’re just not criticisms of Greene.)
AFAICT, the quoted passages from Greene aren’t intended (by Greene) to defend the notion “that moral propositions have no rational justification.” His argument for that proposition (i.e. his argument against moral realism) spans the 90-odd pages prior to the parts Roko excerpted here, and seems independent of anything Haidt has to say about moral psychology.
What the excerpted parts are trying to do is something else entirely: viz. explain how we could think moral propositions have rational justifications, even though they don’t (because moral realism is false).
It looked to me like Greene’s focus on the dilemma responses was, “Look how people waive the requirement for judgments to have justification when it comes to moral issues,” and that’s how I addressed it. I do not believe the participants were issuing waivers for their moral beliefs; rather, the question’s phrasing, combined with the surrounding culture, artificially constrains what counts as a valid response to the question. Not having been prepared to trace the source of such rarely-pondered questions so that they can dig out of the hole they’ve been placed in, the participants stick to a position they find they can’t justify.
Not so impressive, I think, when you look at it that way.
Funny, that’s not what I took to be the point at all, and I don’t think that the case Greene is actually trying to make would be at all affected by your criticisms. He’s simply saying that:
we form moral beliefs of the sort “X is wrong” on the basis of intuitions given to us by evolution; but
because we want to believe that these beliefs are based on “good” reasons, and not merely gut instinct, we try to construct rationales for them after the fact.
Maybe you have some objection to this, but to me it seems fairly reasonable, and consistent with evidence about how we reason in a variety of other contexts.
Yes, and to prove this, he looks specifically at dilemmas people are presented with in which he can beat the argument he knows they’re going to use, presumably to show that people aren’t reasoning.
My response, then, is that all the experiments show is participant under-preparedness. As I pointed out before, if someone were better versed in evolutionary psychology and understood the root of such intuitions, they could give a better defense.
But if I don’t spend my days in situations where knowledge of the tradeoffs involved in incest is important, then yes, Greene is absolutely right, you can stump me on how I justify my beliefs.
But just the same, if I don’t spend my days as a satellite engineer, I won’t be able to defend the proposition that the earth is (very nearly) a sphere against an informed devil’s advocate, and will nevertheless persist in believing the earth is round.
Does that make my belief in the spherical earth a “gut instinct”? If so, fine. But then that deletes the negative significance Greene attributes to “gut instincts” and shows how the propositions they involve can still have objective truth.
At best, Greene’s thesis may be better off if he just scrapped the reference to the dilemma responses.
Sure, but that would still be a rationale generated after the fact, to justify a judgment not initially formed on the basis of those reasons. The point isn’t about whether we can come up with convincing reasons, post-hoc. It’s that, whether or not we end up finding them convincing, they’re still post-hoc. The fact that they don’t seem post-hoc internally is what allows us to maintain the illusion that our opinions were based on sound reasons all along.
This point has different implications depending on whether or not you already think moral realism is false (as Greene does). But it’s not intended (by Greene) as an argument that moral realism is false. (I feel like I’m repeating this point ad nauseam, but your claim that your spherical earth example “shows [gut instincts] can still have objective truth”, still seems to be based on the misapprehension that Greene is using this as an argument against objective moral truth. He’s not. He has separate arguments against that. His argument in this part assumes there is no objective moral truth.)
ETA:
I don’t want to be a dick about this, but this strikes me as a strong claim, coming from someone who doesn’t seem to have bothered to read the whole thesis. I’m not sure that Greene should be held responsible for the fact that you don’t seem to get his point, if you haven’t actually read most of his argument.
Seriously, the overall point you’re making is a good one, but the way you’re making it is, IMO, incredibly unfair to Greene. Given that Roko has actually made the argument you seem to be criticizing, I don’t really understand why it’s Greene who’s getting beaten up.
My point about the spherical earth was to show how his examples about “moral reasoning = post hoc rationalization of gut instinct” prove too much. That is, they could just as well show all our beliefs, even about the most mundane things, to be post-hoc rationalizations. So how is moral reasoning any worse off in this respect? You can trick people into looking ad hoc in morals; you can do the same for earth sphericity. It still says more about your setup than some morality-unique phenomenon you’ve discovered!
And I don’t want to be a dick either, but neither has Greene bothered to consider the most basic, disconfirmatory explanations for the responses subjects gave, explanations btw given by Haidt, someone he extensively quotes!
The phenomenon is probably not unique to morals, and Greene doesn’t need it to be. I don’t see how it would “prove too much” if it were.
What I’m trying to say is that they’re only disconfirmatory of a case Greene is not trying to make.
He most certainly does need it to be, or else he’s just proven that every truth he does accept (or whatever concept isomorphic to truth he’s using) is also a post-hoc rationalization of gut instinct, in which case: what’s the point? Yes, my belief that “killing babies is wrong” is just some goofy intuition I’m trying to justify after involuntarily believing it … but so is Greene’s entire PhD thesis!
Isn’t it cute how he sticks to his thesis even when presented with contradictory evidence?
The point is that it explains how our sense that we have good reasons for things could be an illusion, not that it proves all our intuitions are unjustified.
But I’m just repeating myself now. I think I’m going to stop banging my head against this particular brick wall.
Yes, it explains quite well how our sense that we have good reasons for believing the earth is round could be an illusion.
Hey, don’t feel bad, I found some brick marks on my forehead too.
One last shot:
Um, well, yes. It does explain how that could be the case. And if we had independent reasons to think that statements about the earth being round had no truth value, then it would seem to be a reasonable explanation of how the misperception actually arose.
We don’t have such independent reasons in the round earth case; but Greene argues elsewhere that we do have such reasons in the case of moral judgments.
Your second sentence doesn’t follow. If people cling to a belief even after you’ve “rationally” “defeated” all their reasons for believing it, that is evidence for the belief being based on gut instinct, and evidence that our sense of having good reasons for believing it is illusory. It doesn’t matter that you can find “objective” evidence afterward; that subject’s belief is gut instinct.
So everything is gut instinct, which thus sheds no light on the particular beliefs Greene is criticizing.
Or, you know, you could just go with the simple hypothesis Greene completely ignored, despite familiarity with Haidt, that it’s a silly setup designed to catch people unprepared.
I don’t understand your argument. Nor does it seem to me that you understand mine. It’s rather a shame that we appear to have wasted this much space utterly failing to communicate with each other, but at this point I doubt there’s much to be gained by wasting any more.
I give up. If you want to keep insisting Greene is making an argument that he isn’t, that’s your business. Doesn’t make it true.
I give up. You don’t seem to be listening.
I should make it more clear what I am saying and what Greene is saying. This will be my second job for today.
Yes, it is true that Greene has an entire chapter devoted to giving metaphysical reasons why moral realism is false. I find that chapter unconvincing and long-winded. Essentially it boils down to “you can’t derive an ought from an is”.
Our evolved notions of disgust are there for complex reasons, and they cannot be analyzed exclusively as helping society to solve group co-ordination. For example, it is unclear that people’s instinctive aversion to incest is required to prevent siblings having deformed babies with each other. In modern society, methods such as IVF would allow brothers and sisters to have families together, and your point about family breakdown seems like a classic example of post-hoc justification if ever I saw one. Consider also the trolley cases. Why did evolution equip us with a tendency to avoid pushing the man off the bridge to save 5? What on earth does that have to do with the prisoner’s dilemma?
It doesn’t need to be analyzed exclusively that way; it’s just one reason it’s there, a sort of focal point. “Eww! That guy eats snakes, he’s not like us [which will make it harder to punish him for defection]” Even if the reason for a tradition changes, it can still serve as a focal point to identify ingroup/outgroup.
It wasn’t my point, I was just parroting someone else I read on the matter.
In any case, I wasn’t endorsing the response I gave to the incest case. I was just showing what a response would look like from someone who was actually prepared. When people aren’t prepared for a question—i.e. the 99% of the population that doesn’t deeply reflect on their aversion to incest—yes, they’ll defend a position despite lack of a justification. But guess what: you’ll get the same thing if you ask people to justify their belief that the earth is round.
Greene reads too much into this failure to offer a justification for unusual dilemmas.
Again, I ask that you look at this from the perspective of an actual participant in the survey. That person is imagining grabbing a random person and tossing them off a bridge on very short notice. Numerous factors come into play, and the participant is going to consider them whether or not you assure them that they don’t matter.
Anyway, there are separate questions here:
1) Why did evolution equip us not to push someone off a bridge …? Because overt murder is a bad strategy, and the benefit isn’t tangible enough to outweigh it.
2) Why do people, on sober reflection of the issues, still consider it unethical to push the fat guy off the bridge? That’s easy. For one thing, people on a trolley consented to the risk of a crash in a way that someone standing on a bridge did not consent to some psycho f—suddenly deciding to push him off out of some bizarre sense of heroism. They also intuitively see such an extreme action as violating norms, which makes future actions harder to plan (“let’s take the long way around the bridge”). Etc. There are many reasons to distinguish the alternatives that the “clever” people who design the surveys aren’t taking into account.
Another reason the trolley problem is bogus is that if you were really in such a situation, you wouldn’t be sure your attempt to push the guy onto the track would even succeed. What if he saw you coming and resisted? Pulling a lever with a 100% chance of success is different from pushing a guy with an 87% estimated chance of success and consequences if you fail.
Yes, I think this is a serious problem. All the ways I can think of to give you a very high chance of shoving the guy off mean that you don’t have to actually touch him, just (say) cut a rope, and that wouldn’t just make it more likely you’d succeed but introduce a confounding effect of making it slightly less personal for you.
This is in part because I don’t really believe the explanation for non-shoving that says it has to do with not using people to an end; I think it’s just squeamishness about shoving someone with your own hands who was right next to you. If you were dropping them onto the tracks from a great distance by pulling a lever, I think people would pull the lever a lot more often. I haven’t tested this, of course.
Then make the lever probabilistic.
Well if you fine-tune the conditions of a hypothetical dilemma too much, people will tend to go with a useful heuristic: they will call bullshit.
Then add in some irrelevant noise considerations, such as “one of the people on the tracks who is about to die is your wife, but the guy you are going to push off is a war veteran”, etc. The dilemma doesn’t have to be fine tuned—a broad variety of choices all exhibit the same properties.
Read first, flame second: Greene writes on page 192:
Particular cultures exploit a subset of the possible moral intuitions we are prepared to experience, much in the way that particular languages exploit a subset of the possible phonemes we are prepared to recognize and pronounce (Haidt, pg. 827). According to Shweder and his colleagues (1997), these intuitions cluster around what he calls the “big three” domains of human moral phenomena: the “ethics of autonomy” which concerns rights, freedom, and individual welfare; the “ethics of community” which concerns the obligations of the individual to the larger community in the form of loyalty, respectfulness, modesty, self-control, etc.; and the “ethics of divinity” which is concerned with the maintenance of moral purity in the face of moral pollution
Greene cites Haidt a lot.
Well, I do apologize for not reading 192 pages before responding (in my defense, neither did anyone else). But the excerpt that you deemed representative of Greene’s work (and your commentary) did not show any assimilation of Haidt’s insights, so why should I have believed the rest of the dissertation would fill such a gaping hole?
The excerpt you just posted doesn’t seem to help either. Okay, he did in fact read Haidt. Do his responses to Haidt enable Greene to show how people are incorrectly viewing and classifying moral statements? If not, my original point stands.
I’m not sure you needed to. You just needed to read this bit of Roko’s excerpt properly:
The subsequent excerpts are aimed at the second purpose, not the first. (At least until Roko interprets them in support of the first at the end of the OP; but in context it seems reasonable to think they buttress the case against realism here, even if they don’t provide a stand-alone justification for it.)
I did. Just not very thoroughly.
You would probably benefit from reading Greene’s introduction chapter where he summarizes his argument.
I’d benefit even more if you wrote a better summary ;-)
Your comment reminds me of epicycles somehow, not sure if I just fail to appreciate this info adequately...
Agreed that this is true and important. It is odd to me that so many more people accept the ideas of behavioral economics and evolutionary psychology, yet don’t take the obvious leap to question whether our moral intuitions are a hard-wired module that evolved to serve our genetic interests, and thus feel like a window onto objective truth, yet are very very different from a sensory perception.
Here’s an example that may help introspectively honest people, partly inspired by a blog post of PJ Eby’s. Consider the social nature of guilt and shame. That is, how different it feels to do something “wrong” and get caught than to do the same thing if you are totally sure that no one will ever find out (and God won’t mind). Most people have some internal morality, and some noble few have a strong internal morality, but most people also have a very strong external morality, that is, “morality” that is really fear of getting caught by other humans. (Or God).
Morality based on getting caught makes no sense in terms of objective truth, but it makes total sense if morality is a way of guiding our behavior in the tribe so as to conform to social standards.
That’s signaling vs. your own desires, good images vs. good decisions.
Inferential distance issue perhaps?
Voted Down. Sorry, Roko.
I don’t find Greene’s arguments to be valuable or convincing. I won’t defend those claims here but merely point out that this post makes it extremely inconvenient to do so properly.
I would prefer concise reconstructions of important arguments over a link to a 377-page document and some lengthy quotes, many of which simply presuppose that certain important conclusions have already been established elsewhere in the dissertation.
As an exercise for the reader demonstrating my complaint, consider what it would take to work out whether Joshua Greene has any argument against this analysis of morality.
I agree that this is an important discussion to have but I don’t think this post helps us to engage in a productive discussion. Rather, it merely seems to handicap those who disagree with Greene on multiple points when they wish to participate in the discussion and does so without adequate justification.
“This analysis of morality” is badly in need of a summary itself, as Hanson pointed out. My summary would be that it is a version of the naturalistic fallacy, the idea that morality is whatever people think it is.
If you have any more suggestions as to what you’re after, do let me know.
I will attempt to do some condensing work at some point today given the comments that we have seen here.
Thanks.
There’s been a lot of discussion about that incest question, but I don’t think anyone’s come out and said whether they think the scenario represents a moral transgression. I wonder what folks here think of the scenario. In fact, let’s consider three scenarios:
the scenario as stated, with outcome specified (ETA: that is, we know for certain that no harm occurred), under the (default) assumption that these siblings were raised together;
the scenario with the outcome not specified, but these two have researched the question and, rightly or wrongly, have come to the conclusion that they are not more at risk than any non-sibling pairing; default assumption about raising;
as above, but the siblings only met as adults.
I’m going to rot13 my reply. V svaq Terrar’f cbfvgvba ntnvafg zbeny ernyvfz pbaivapvat. V guvax gur frpbaq fpranevb unf gur zbfg cbgragvny gb or ceboyrzngvp, fb V jvyy bayl nqqerff vg. V cersre crbcyr abg unez bguref, fb V jbhyq or ntnvafg gur rapbhagre bayl vs gur fvoyvatf unq na rkcrpgngvba bs qnzntvat bar nabgure. Va zl fpranevb, gur jbefg V pbhyq cbffvoyl fnl bs gurz vf gung gurve vasreraprf ner cbbe (naq V unira’g ybbxrq vagb gur vffhr va nal terng qrgnvy, fb V jbhyqa’g rira tb fb sne ng gur cerfrag gvzr).
As specified, boivbhfyl abg. V nz pncnoyr bs qvfgvathvfuvat zl crefbany fdhvpxl srryvatf sebz zl frafr bs evtug naq jebat.
Va nal bs gur guerr fpranevbf, ab genafterffvba. V qba’g haqrefgnaq jung bhgpbzr fcrpvsvrq zrnaf, ohg vg vf hayvxryl gb vasyhrapr zl qrpvfvba.
In the original scenario, it was specified that no harm at all occurred. Other discussants here brought up the point that even if that were so, if the siblings could reasonably expect that some harm might result, then the act might represent a moral transgression.
Would everyone please stop with ROT-13. It’s hard to follow. Thank you.
ab
lrf. Gur oebgure naq fvfgre pbhyq snyy vagb ebznagvp ybir. Gurl pbhyq arire or gbtrgure, fb guvf jbhyq pnhfr gurz gbezrag sbe gur erfg bs gurve yvirf.
But surely that’s their problem? And why can’t they be together?
The author seems to assert that this is a cultural phenomenon. I wonder if our attempts at unifying into a theory might not be instinctive, however. Would it then be so obvious that Moral Realism were false? We have an innate demand for consistency in our moral principles, that might allow us to say something like “racism is indeed objectively wrong, IF you believe that happiness is good.”
That being said, I don’t think it’s enough to save moral realism. The probability that moral realism is false has been a disturbing prospect for me lately, so I’m curious how he carves out a comfortable alternative.
OK, I skimmed that a bit because it was fairly long, but here are a few observations...
I think the default human behavior is to treat what we perceive as simply being what is out there (some people end up learning better, but most seem not to). This is true for everything we perceive, regardless of the subject matter—i.e. it is nothing specific to morality.
I think it can—sometimes—be reasonable to stand by your intuition even if you can’t reason it out. Sometimes it takes time to figure out and articulate the reasoning. I am not trying to justify obstinacy and “blind faith” here! Just saying that sometimes you can’t be expected to understand it straight away.
I don’t see any justification given, in what you quote from Greene, for the claim that there’s essentially no justification for morality.
Where do you think technical certainty comes from? How do you know to believe in logic? That’s all just highly distilled and reflectively processed forms of gut feeling.
Forgot—there was another observation I had… this one is just quick sketching:
Regarding the idea that ‘moral properties’ are projected onto reality:
As our moral views are about things in reality, they are—amongst other things—forms of representation.
I think we need a solid understanding of what representations are, how they work, and thus exactly what it is they “refer” to in the world (and in what sense they do so), before we’ll really even have adequate language for talking about such issues in a precise, non-ambiguous fashion.
We don’t have such an understanding of representations at the moment.
I made a similar point in another comment on a post here dealing with the foundations of mathematics—that we’ll never properly understand what mathematical statements are, and what in the world they are ‘about’, until we have a proper theory of representation.
I.e., I think that in both cases it is essentially the same thing holding us back.
No one said anything in response to that other comment, so I’m not sure what people think of such a position—I’d be quite curious to hear your opinion...
I think that in this specific case, evolutionary theory gives us enough to go with. Sure, we don’t understand how the entire human mind works, but I don’t think we need to to make Greene’s point.
So I guess my answer is “true but irrelevant”
There is a lot known in metamathematics and formal semantics, so you’d need to be more specific than that.
I don’t think there’s anything that comes close to giving a theoretical account of how mathematical statements are able to, in some sense, represent things in reality.
Again, you need to be more specific. If you assume certain models of reality (sometimes very reasonable for the real world), there are notions of describing/representing/simulating that system, finding or proving its properties. Physics, graphical models, etc.
That is exactly what you can’t assume if you want to explain the basis of representation.
How very Lewis Carrollean of you. Except that Carroll’s absurdities were always founded in logic and involved contradicting ‘common sense’, while this point is founded in common sense and contradicts logic.
It might be appropriate to act on such an intuition, and it is always appropriate to acknowledge its existence, but to ‘stand by it’ implies a claim to justification for the assertion, and that is clearly ruled out.
Perhaps I should have phrased it as ‘...stand by your intuition for a while—even if you can’t reason it out initially—to give yourself an adequate chance to figure it out’.
Any ‘figuring out’ is almost certainly going to produce an ad hoc Just-So Story.
Rationalists do not ignore their intuition. Nor do they trust it. If they don’t have a rational justification for a principle, they don’t assert it.
They don’t negate it, either.
No, “to figure out” in such a case would mean to find evidence that you suspected might exist, but weren’t sure about it.
If you are able to delay action while performing research motivated by your initial intuition, it can mean the difference between life and death. It has happened.
Finding new evidence is not “figuring out”. That refers to cognitive processing, not evidence discovery.
[edit: included quote]
That implies that the only correct intuition is one you can immediately rationally justify. How could progress in science happen if that were true?
Science is basically a means to determine whether initial intuitions are true.
Wrong. An intuition is correct if it matches reality.
Accepting an intuition is only rational if it can be rationally justified, in which case the intuition isn’t needed, is it?
No, science is a methodology to determine whether an assertion about reality should be discarded. If it merely dealt with initial intuitions, its usefulness would be exhausted once the supply of initial intuitions had been run through.
I haven’t been able to work out your stance on philosophy of science—have you written about it? You seem at times to be a Popperian, like in the statement “science is a methodology to determine whether an assertion about reality should be discarded.”
But a Popperian would expect a scientist to accept an intuition and stand by it until it gets refuted—thus, “conjectures and refutations”. It sounds like you’d like propositions to only be spoken aloud if they’re logically deducible, and in that case there would be little use to try to empirically refute them.
Indeed, and that is why it’s wrong to say that attempts to rationally justify statements about reality are “almost certainly going to produce an ad hoc Just-So Story”.
I’m not sure what the second sentence there is taking “initial intuitions” to mean, but I don’t think there’s any substantial disagreement between our statements.
That’s not what I said.
I have no interest in helping to generate a Gish Explosion. Please confine yourself to addressing arguments I actually make, rather than straw men.
I’m not trying to be a jerk. Let me try to explain things, as I don’t think I communicated my point very clearly.
Just to start off, the quoted text is something you said.
But perhaps you are saying that the sentence I’ve embedded it in does not reflect anything you said? If so, it’s not meant to—it’s describing the point I was making, and to which your response included that quoted text.
Essentially, my last comment was trying to point out that what I’d originally said had been misinterpreted in the Just-So Story bit, even though I didn’t do a great job of making this clear. Of course you may argue that you didn’t misinterpret me, but I certainly wasn’t trying to put words into anyone’s mouth.
No, the quoted text includes a fragment of what I said. Your statement about what I said is wrong as a whole.
The point you were making has nothing to do with the discussion that’s going on. That’s called a non-sequitur, and it’s a traditional rhetorical fallacy.
You wouldn’t agree that it is sometimes (usually, even!) a bad idea to throw out all rules in a system that you don’t see a use for, especially when you cannot claim to understand the system as a whole?
I may have to do a follow up post to spell it out.
If anyone can think of a way to condense this post, i.e. cut some stuff out, then let me know. I may give it a go myself later today.
I agreed but came to the opposite conclusion. Because I think that an ethics of naive moral intuition leads to worse outcomes than a fairly robust consequentialism/virtue ethics, I use the latter to trump the former.
Not that I disagree with your decision, but if you’re talking about “worse outcomes”, then aren’t you already assuming consequentialism during your evaluation of moral systems?
Yes. :) I really can’t imagine any other way of evaluating moral theories than “what would/could the world look like if we applied them.”
Minor quibble, interesting info :
“like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have some low complexity description”
The particular example of the planets’ orbits is actually one where such a simple rule exists: see the Titius–Bode law.
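For what it’s worth, the rule is just a_n = 0.4 + 0.3 · 2^n AU (with Mercury conventionally taking only the 0.4 term). A quick sketch comparing it against roughly remembered observed values—the observed figures here are approximate, and the function name is my own:

```python
def titius_bode(n):
    """Predicted semi-major axis in AU; n=None is Mercury's special case."""
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

# Approximate observed semi-major axes in AU, for comparison.
bodies = [
    ("Mercury", None, 0.39),
    ("Venus",   0,    0.72),
    ("Earth",   1,    1.00),
    ("Mars",    2,    1.52),
    ("Ceres",   3,    2.77),  # the asteroid belt fills the n=3 slot
    ("Jupiter", 4,    5.20),
    ("Saturn",  5,    9.58),
]

for name, n, actual in bodies:
    print(f"{name:8s} predicted {titius_bode(n):5.2f}  actual {actual:5.2f}")
```

The fit breaks down badly for Neptune, so it’s arguably numerology rather than a law—but it does show that messy processes occasionally admit surprisingly low-complexity descriptions.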
To a first approximation, yes. But sometimes people here underestimate the importance of culture in shaping morality. See the sub-discipline of cultural psychology, e.g. Richard Shweder. Jon Haidt and Joshua Greene rightly place more emphasis on the biological basis and evolutionary origins of morality, but there is still quite a bit of room for culture.
Greene, page 192.
So we have here a ‘guess’ about what people actually trained to think about morality might be thinking, as well as reasoning based on what people insufficiently trained in morality think.
If anything, this might serve as an argument that we need to actually treat ethics seriously, and teach it to everybody (not just philosophers).
He seems to regard intuition as though it’s not a sort of perception. That seems clearly wrong.
I was amazed to note that this was being presented in a philosophy department. But then, I don’t know what Princeton’s department is like.
It seems inconsistent to be denying moral realism and then making claims about what sort of language we should be using.
The thesis doesn’t claim “we should do X” but rather “X is an effective way to reduce misunderstanding and conflict and change generally prevailing social conditions.” The inconsistency arises only if Greene grounds in moral realism the assertion that reducing misunderstanding and conflict is desirable and that the prevailing social conditions ought to be changed.
This is a good point. I think I need to do another post on “what you can still say once you’ve abandoned realism”.
Doesn’t the simple statement that “you can still agree on some things” pretty much sum it up?
Right—in this case he is claiming that, by our own subjective criteria, each one of us would benefit if we all used less realist language.
No, because not all norms are moral norms.
That’s actually a pretty contentious claim.
Non-mural? Nein!
Thank you for introducing the position of the thesis. I started reading it a couple of times, but never got very far.
It’s a fine effort at correcting stupidity, but the argument given here shouldn’t be carried too far either. For example, a lot of the misleading points in the above quotes can be revealed by analogizing the prior with utility, as two sides of (non-objective) preference. Factual judgments are not fundamentally different from moral judgments on the subjective-objective scale, but factual judgments can often be so clear that an argument for them can be called “logical”, or the judgments themselves can be called “facts” or “observations”; different people will agree on such facts universally, giving them the feel of objectivity. Justifications are a tool of communication as well as a tool for removing incoherence from intuitive judgments; the error lies in overly increasing the confidence of your conclusion based on rationalization, not in augmenting the pre-existing conclusion with an argument that helps your opponent arrive at the same conclusion. While it’s easier to see how preferences can differ between people—so that a moral argument won’t succeed in evoking the same judgment through communication, because it’s incorrect for the other person—the same also applies to the prior; but updated priors get the benefit of tons of data to arrive at the same decision in the absence of reflective consistency, while moral claims starve from lack of experience. And so on, and so forth.
P.S. You can improve formatting of the article by fixing the html source which is accessible from the article editor toolbar.
This seems like just another example of our tendency to (badly) rationalize whatever decisions we made subconsciously. We like to think we do things for good reasons, and if we don’t know the reasons we’ll make some up.
Is your basic thesis here that (a) because “morals” are, for the most part, based on something that is not rational, and (b) because most people will nonetheless do their best to justify even the most irrational of their morals, (c) there is therefore no point in trying to construct a morality that is based in rationality?
That’s what it sounds like, but I wanted to make sure I had it right before launching into commentary...
Today on BloggingHeads is a diavlog between Joshua Greene and Joshua Knobe: Percontations: Explaining and Appraising Moral Intuition.
And in this dialog, at the very beginning, Greene gives the same example with incest, about which he seems to assert that he considers it the right choice to dispose of a moral intuition if you understand how it evolved as an adaptation and understand that the reason it evolved is no longer present. This seems to be a total failure to see the Evolutionary-Cognitive Boundary, and is an approach that calls for, say, abolishing love if it no longer contributes to reproduction.
Although his position isn’t clear on this, maybe Roko can clarify, having read the thesis.
As a moral nihilist and/or egoist I tend to agree with the general sentiment of this article, though I would not take the tack of saying morality needs to be reformed—it’s so nonsensical and grinding it may be as possible (and more beneficial) to simply stop pretending magical rules and standards need apply.
I cannot understand what you are trying to say here. Perhaps try expanding your point to more than one sentence? What do you mean by “grinding”?
Sorry for the delay, I just checked this: I think actual morality tends to systematically bias behaviour and ideas about ‘social’ life which are contrary to fact and create all sorts of personal and interpersonal problems. I also think it gives far too strong a ‘presumption’ towards the benevolence of do-gooders, the sanity of ‘sticking to your guns, come what may’ and the wisdom of the popular.
There is a more general problem with cognitive dissonance and idea-consistency, due to the literal nonsensicality of most moral claims and sentiments. I also see that the alleged ‘gains’ from morality are frequently self-inflated, if not false to begin with; while the alternative—intellectual consistency and a recognition of purposeful action as aimed at subjective satisfaction—is vastly underrated, even by people of a ‘libertarian’ bent.
Most of this is less controversial here than elsewhere, with the exception of the reduction of all our goals to “subjective satisfaction”. Many LWers aspire to rational pursuit of our preferences, but with the important distinctions that
1. we recognize that the optimal long-term strategy can differ greatly from the optimal one-shot analysis, and
2. we have preferences about some anticipated world-states rather than just anticipated mind-states.
In response to this I say there is nothing about subjectivist satisfaction which prevents taking these (or anything else) into consideration. Further, I do not mean this in a utility-function sense, but rather ‘actual wants derived from valuation forecasts which result in intentionality’.
OK, I don’t understand that either.
Which is right and proper.
I’m very sympathetic to Greene’s views. In fact, I’m midway through a philosophy PhD on evolution and morality myself (more at http://ockhamsbeard.wordpress.com/). However, I’d never read Greene’s entire dissertation—so thanks for the link.
On his views, there’s one point I’d like to raise. The reason why “people tend to believe that moral judgments are produced by reasoning even though this is not the case” goes back to the evolutionary roots of our moral intuitions.
Assuming that morality has evolved to encourage pro-social behaviour, it’s plausible that this feeling of objectivity has been selected for in order to encourage the spread of pro-social norms.
If, for example, moral judgements felt emotionally neutral, contingent and subjective – like a preference for chocolate over strawberry – we might be less inclined to voice approval or disapproval of others’ behaviour if it contravened our moral inclinations.
However, if it felt as though our moral inclinations were universal and categorical, we might be more inclined to judge others’ behaviour more strictly, making vocal proclamations of approval/disapproval and encouraging others to conform to the moral norms we held. This makes moral norms more ‘sticky’ than non-moral preferences, and helps them propagate amongst the community, thus improving group cohesion.
If this is true, then it only further undermines moral realism and moral rationalism.
ERROR: Postulation of group selection detected in paragraph 3.
Group selection indeed. Although I’m talking about both biological and cultural evolution working together, and in such a context, group selection is plausible (not that it’s entirely implausible in biological evolution).
See Haidt & Kesebir (2009) for an overview of group selection in this context.
In short, it’s not all about altruism but about group cohesion, and promoting group cohesion doesn’t need to come at the same cost that accompanies altruism.
If anyone can think of a way to condense this post, i.e. cut some stuff out, then let me know. I may give it a go myself later today.
As the first excerpt notes, Greene makes three distinct arguments (though each builds on the previous ones), but the parts you’ve excerpted really only relate to the second. It might make more sense to do 1, 2 and 3 as separate posts.
thanks. I might do this. But I kind of wanted to nail this in one post.
Julie and Mark would have to be good at keeping their experiment secret. If they had a good experience together, having harmed neither each other nor themselves (that golden rule), who else could know, from the trust of their experience and emotions, the purity in their hearts? The sexuality of the question is interesting to me. When we are with a lover in the usual ways, are we really alone with them? It can take many years of trust to melt into love, to have a change of consciousness, of unity. The incest question makes me think of these siblings: they could be each other’s favorite people, and they might never know a truly comfortable love-making without each other. I believe that if that were so, and they found a transcendent love, they would have done better than maintain morality; they would have defeated morality.
A psychiatric patient can be asked whether they have suicidal or homicidal urges, or ideas of harming in general, and then be shredded between the options of hospitalization and facing these horrors alone, the moral tragedy being that a nurse or doctor could be reprimanded for allowing that person to cry on their shoulder.
I support the premise of this article, and my only conclusion is that we can have ‘golden rule moments’. Sometimes we have an option that is morally pure and no one can do the math to prove it, nor repeat it in a lab.
I believe we can get better at this. In a Nick Bostrom TED talk, he really convinced me that life could be ecstatic beyond imagination. Ethics could be easier. There’s so much negativity to cope with now.
But the intuition has to come from reason initially, no? Like, the first human ever who had thoughts about incest didn’t have a heuristic obtained from his parents or the society.
No. All mammals tend not to mate with those they’ve been raised with (i.e. those most likely to be close relatives). The heuristic is inborn, not learned. It predates conscious reasoning.
What makes this interesting is that we’re otherwise inclined to find those genetically similar to us very attractive. So (e.g.) siblings who are raised apart and encounter each other later in life have a much higher than average chance of at least wanting a sexual relationship. Stories like that are common in fiction and reality.
Really? That seems like it’s generally going to be counterproductive from an evolutionary perspective. It also doesn’t really gel with at least parts of what I’ve read about attraction and diversity in immune systems. E.g. research has apparently shown that:
ETA: This critical literature review presents a more mixed view of the evidence on preferences for MHC diversity in mates, so the actual picture may be more complicated than either of our original claims.
Abstract reproduced for those without access:
I dock Joshua Greene one point for obscuring the difference between ethics and morality.
As an ethicist, I must point out that “synonym for ‘ethics’” is the most commonly-used definition of ‘morality’ amongst people who use it formally. So much so that if you intend them to mean different things, you must explicitly define them in the paper. Every time.
ETA: The claim of ‘most commonly-used’ is an impression, not a statistically-derived fact
What is the difference between ethics and morality?
He does state that he will use the terms interchangeably in his introduction
That’s why I docked the point!
They are not the same!
They’re just words. You use ‘em one way, he uses ’em another way. If called on to justify this, he might say, “While it is sometimes useful to distinguish between them, by and large people use them interchangeably without degrading their communication, and that was the case here.”
But improper use necessarily degrades communication, because the distinction expressed by the words is eliminated when the two are treated as equivalent.
There’s no such thing as “just words” in rational argument. Words are the entirety of the process.
Actually laughed out loud at this one.
Morality: society’s agreement on what is ‘right’ and ‘wrong’
Ethics: rules or principles guiding behavior, doesn’t refer to society as a whole, often a code adopted by professional groups, possessed by individuals and entities within societies.
Ethics are not necessarily compatible with morality and do not necessarily address societal concepts of virtue and vice.
FWIW, here are some quick and dirty definitions, specifying how people use ‘Ethics’ and ‘Morality’:
1. Ethics = Morality
2. Morality: right/wrong; Ethics: the study of morality
3. Morality: rules about social behavior; Ethics: rules about behavior in general
4. Morality: what society thinks is right/wrong; Ethics: what’s actually right/wrong (this definition is sometimes used by fans of Nietzsche, who called himself an ‘amoralist’)
5. (contra 4) Morality: actual right/wrong; Ethics: mere opinion about how one should behave
6. Morality: ‘good’ vs ‘evil’; Ethics: ‘good’ vs ‘bad’
Oh, and the one you specified is employed as well.
Some of these seem outright contradictory, if not merely inconsistent. As such, I will continue to recommend explicitly defining ‘morality’ and ‘ethics’ whenever you don’t mean something like “what one has most reason to do or want” for both of them, or when the meaning isn’t clear from context (for example, when talking about the Professional Code of Ethics for some profession).
ETA: And according to the OED, the words both have their origins in something like “society’s expectations”
Interesting. On which Holy Tablet of True Definitions are these written?
The movie Election
The one entitled “The Dictionary”.
Another nice definition, which I prefer, has
Morality: rules a person wants to follow.
Ethics: rules a person wants others to follow.
Law: rules a person is willing to force others to follow.
...but no one but me uses those, either. Ah, well.
What’s authoritative in this question? Wikipedia, for example, lists multiple notions of morality, one of them “a synonym for ‘ethics’”, and ethics is said to be a philosophical discipline addressing questions about morality...
When there are multiple usage patterns which are mutually inconsistent and incompatible, usefulness becomes the most likely distinguishing factor.
It is useful to be able to distinguish between “societal standards of right and wrong” and “rules defining the proper way to do specific things”.