If you’re saying that we can’t trust the morality that evolution instilled into us to be actually good, then I’d say you are correct. If you’re saying that evolutionary ethicists believe that our brain has evolved an objective morality module, or somehow has latched onto a physics of objective morality… I would like to see examples of such arguments.
I am saying evolutionary morality as a whole is an invalid concept that is irrelevant to the subject of morality.
Actually, I can think of a minutely useful aspect of evolutionary morality: it tells us that the evolutionary mechanism by which we got our current intuitions about morality is stupid, because it is the same mechanism that gave lions the intuition to (quoting the article I linked to) ‘slaughter their step children, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom)’.
If the mechanism by which we got our intuitions about morality is stupid, then we learn that our intuitions are completely irrelevant to the subject of morality. We also learn that we should not waste our time studying such a stupid mechanism.
I think I’d agree with everything you said up until the last sentence. Our brains are, after all, what we do our thinking with, so everything good and bad about them should be studied in detail. I’m sure you’d scoff if I turned your statement around on other poorly evolved human features: say, claiming there’s no point in studying the stupid mechanism of the human eye, and that the eye is completely irrelevant to the subject of optics.
Nature exerts selective pressure against organisms that have a poor perception of their surroundings, but there is no equivalent selective pressure when it comes to morality. This is the reason why the difference between the human eye and the lion eye is not as significant as the difference between the human intuitions about morality and the lion’s intuitions about morality.
If evolution had made perception of the surroundings as wildly variable across species as morality is, I would have argued that we should not trust what we perceive and should not bother to learn how our senses work. Similarly, if evolution had exerted selective pressure against immoral organisms, I would have agreed that we should trust our intuitions.
Nature exerts selective pressure against organisms that have a poor perception of their surroundings, but there is no equivalent selective pressure when it comes to morality.
What an absolutely wild theory!
Humans’ domination of the planet is totally mediated by the astonishing level of cooperation between humans. Matt Ridley, in The Rational Optimist, even reports evidence that the ability to trade is an evolutionary adaptation that is more uniquely human than even language is. Humans are able to live together, without killing each other, at densities orders of magnitude higher than other primates.
The evolutionary value of an effective moral system seems overwhelmingly obvious to me, so it will be hard for me to realize where we might disagree.
My claim is this: human morality is the basic set of rules by which humans interact. With the right morality, our interaction leads to superior cooperation, superior productivity, and superior numbers. Any one of these would be enough to give the humans with the right morality an evolutionary advantage over humans with a less effective morality. For example, if we didn’t have a highly developed respect for property, you couldn’t hire workers to do as many things: you would spend too much effort protecting your property from them. If we didn’t have such an orientation against doing violence to each other except under pretty limited circumstances, again, cooperative efforts would suffer a lot.
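To make the cooperation claim concrete, here is a minimal toy sketch (my own illustration, not anything from Ridley or the parent comment): a group whose members follow a “cooperate, retaliate only if defected against” norm accumulates far more total payoff in repeated interactions than a group of unconditional defectors. The strategy names, payoff numbers, and group sizes are all arbitrary assumptions.

```python
# Toy model: repeated pairwise interactions within two groups.
# Group A follows a cooperative norm (tit-for-tat); group B always defects.
# Payoffs are standard prisoner's-dilemma values; all numbers are arbitrary.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(partner_history):
    """Cooperate first, then copy whatever the partner did last."""
    return 'C' if not partner_history else partner_history[-1]

def always_defect(partner_history):
    return 'D'

def group_total(strategy, n_pairs=10, rounds=50):
    """Total payoff produced when every member of the group uses `strategy`."""
    total = 0
    for _ in range(n_pairs):
        hist_a, hist_b = [], []
        for _ in range(rounds):
            a = strategy(hist_b)          # each player reacts to the other's history
            b = strategy(hist_a)
            pa, pb = PAYOFF[(a, b)]
            total += pa + pb
            hist_a.append(a)
            hist_b.append(b)
    return total

print("cooperative-norm group:", group_total(tit_for_tat))    # 3000
print("always-defect group:   ", group_total(always_defect))  # 1000
```

The point of the toy numbers is only the direction of the difference: a shared norm of conditional cooperation makes the whole group more productive, which is the kind of advantage the paragraph above is gesturing at.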
This is the reason why the difference between the human eye and the lion eye is not as significant as the difference between the human intuitions about morality and the lion’s intuitions about morality.
It certainly seems the case that our moral intuitions align much better with dogs and primates than with lions.
But plenty of humans have decimated their enemies, and armies even to this day tend to rape every woman in sight.
But of course evolution made perception of the surroundings as wildly variable as morality. There are creatures with zero perception, and creatures with better vision (or heat perception, or magnetic or electric senses, or hearing or touch...) than we’ll ever have. Even if humans were the only species with morality, arguing about variability doesn’t hold much weight. How many things metabolize arsenic? There are all kinds of singular evolutionary outcomes that this argument seems unable to handle just because of the singularity of the case.
So I certainly agree that facts about evolution don’t imply moral facts. But the way you talk seems to imply you think there are other ways to discover moral facts, and I doubt there are objective justifications for human morality that are any better than justifications for lion morality. In terms of what we actually end up valuing, biological evolution (along with cultural transmission) is hugely important. Certainly a brain module for altruism is not a confirmation of altruistic normative facts. But if we want to learn about human morality as it actually exists (say, for the purpose of programming something to act accordingly), it seems very unlikely that we would want to neglect this research area.
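To sketch what “programming something to act accordingly” could look like (a hypothetical illustration, not anything proposed in the thread): one very simple way to model human morality as it actually exists is to infer a latent value for each action from pairwise human judgments, Bradley-Terry style. The actions, judgments, learning rate, and iteration count below are all invented for the example.

```python
# Hedged sketch: infer a scalar "moral value" for each action from pairwise
# human judgments ("a was judged better than b"), using a Bradley-Terry
# model fit by gradient ascent. Actions and judgments are made up.

import math

judgments = [            # (preferred, rejected) pairs from hypothetical raters
    ("share food", "hoard food"),
    ("share food", "steal food"),
    ("hoard food", "steal food"),
    ("tell truth", "steal food"),
]

actions = {a for pair in judgments for a in pair}
value = {a: 0.0 for a in actions}          # latent values, initially equal
lr = 0.1

for _ in range(2000):                      # gradient ascent on the log-likelihood
    for winner, loser in judgments:
        p_win = 1.0 / (1.0 + math.exp(value[loser] - value[winner]))
        grad = 1.0 - p_win                 # d log P(winner beats loser) / d value[winner]
        value[winner] += lr * grad
        value[loser] -= lr * grad

for a, v in sorted(value.items(), key=lambda kv: -kv[1]):
    print(f"{a:12s} {v:+.2f}")
```

This is only a caricature of descriptive ethics, of course, but it shows the shape of the project: the values come from observed human judgments, not from any claim that those judgments are objectively correct.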
I initially wrote up a bit of a rant, but I just want to ask a question for clarification:
Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?
I’m worried that you don’t because the argument you supplied can be augmented to apply there as well: just replace “genes” with “brains”. If your answer is a resounding ‘no’, I have a lengthy response. :)
IMO, what each of us values for themselves may be relevant to morality. What we intuitively value for others is not.
I have to admit I have not read the metaethics sequences. From your tone, I feel I am making an elementary error. I am interested in hearing your response.
Thanks
I’m not sure if it’s elementary, but I do have a couple of questions first. You say:
what each of us values for themselves may be relevant to morality
This seems to suggest that you’re a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fact it is not. The morality or immorality of an action has less to do with the action itself than with how our brains react to hearing about or seeing the action.
Taking this route, I would say that not only are our values relevant to morality, but the dynamic system comprising all of our individual value systems is an upper bound on what can be in the extensional definition of “morality”, if “morality” is to make any sense as a term. That is, if something is outside of what any of us can ascribe value to, then it is not moral subject matter; and furthermore, what we can and do ascribe value to is dictated by neurology.
Not only that, but there is a well-known phenomenon that complicates naive (without input from neuroscience) moral decision making: the distinction between liking and wanting. This distinction crops up in part because the way we evaluate possible alternatives is lossy—we can only use a very finite amount of computational power to try to predict the effects of a decision or of obtaining a goal, and we have to use heuristics to do so. In addition, there is the fact that human valuation is multi-layered—we have at least three valuation mechanisms, and their interaction isn’t yet fully understood. Also see Glimcher et al., Neuroeconomics and the Study of Valuation. From that article:
10 years of work (that) established the existence of at least three interrelated subsystems in these brain areas that employ distinct mechanisms for learning and representing value and that interact to produce the valuations that guide choice (Dayan & Balleine, 2002; Balleine, Daw, & O’Doherty, 2008; Niv & Montague, 2008).
The mechanisms for choice valuation are complicated, and so are the constraints for human ability in decision making. In evaluating whether an action was moral, it’s imperative to avoid making the criterion “too high for humanity”.
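As a toy illustration of how separate valuation mechanisms can pull apart (my own sketch in the spirit of the devaluation experiments that literature discusses, not code from Glimcher or from Dayan & Balleine): a “model-free” system caches values learned from experienced reward, while a “model-based” system recomputes value from its current model of outcomes. When an outcome is devalued, the cached value lags behind, so the two systems recommend different choices. The actions, rewards, and learning rate below are arbitrary assumptions.

```python
# Toy model of two valuation systems that can disagree.
# Model-free: caches Q-values learned from experienced reward.
# Model-based: looks up the *current* reward of each action's outcome.
# After the outcome of "press_lever" is devalued, the cached value lags
# behind while the model-based evaluation updates immediately.

outcome_of = {"press_lever": "food", "pull_chain": "water"}
reward = {"food": 1.0, "water": 0.5}       # current rewards (can change later)

q_cached = {a: 0.0 for a in outcome_of}    # model-free cache
alpha = 0.2                                # learning rate (arbitrary)

# Training phase: the model-free system learns from experienced rewards.
for _ in range(100):
    for action, outcome in outcome_of.items():
        q_cached[action] += alpha * (reward[outcome] - q_cached[action])

def model_based_value(action):
    """Re-evaluate the action from the current model: outcome -> current reward."""
    return reward[outcome_of[action]]

# Devaluation: food stops being rewarding (e.g., satiation).
reward["food"] = 0.0

for action in outcome_of:
    print(f"{action:12s} cached={q_cached[action]:.2f} "
          f"model_based={model_based_value(action):.2f}")
```

The cached system still “wants” the lever even though the recomputed value says the outcome is no longer worth anything, which is roughly the wanting/liking-style dissociation being pointed at above.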
One last thing I’d point out has to do with the argument you link to, because you do seem to be inconsistent when you say:
What we intuitively value for others is not.
Relevant to morality, that is. The reason is that the argument cited rests entirely on intuition for what others value. The hypothetical species in the example is not a human species, but a slightly different one.
I can easily imagine an individual from a species described along the lines of the author’s hypothetical reading the following:
If it is good because it is loved by our genes, then anything that comes to be loved by the genes can become good. If humans, like lions, had a disposition to eat their babies, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom), then these things would be good. We could not brag that humans evolved a disposition to be moral because morality would be whatever humans evolved a disposition to do.
And being horrified at the thought of such a bizarre and morally bankrupt group. I strongly recommend you read the sequence I linked to in the quote if you haven’t; it’s quite an interesting (and relevant) short story.
So, I have a bit more to write but I’m short on time at the moment. I’d be interested to hear if there is anything you find particularly objectionable here though.
What would probably help is if you said what you thought was relevant to morality, rather than only telling us about things you think are irrelevant. It would make it easier to interpret your irrelevancies.
Evolutionary Biology might be good at telling us what we value. However, as GE Moore pointed out, ethics is about what we SHOULD value. What evolutionary ethics will teach us is that our minds/brains are malleable. Our values are not fixed.
And the question of what we SHOULD value makes sense because our brains are malleable. Our desires—just like our beliefs—are not fixed. They are learned. So, the question arises, “Given that we can mold desires into different forms, what SHOULD we mold them into?”
Besides, evolutionary ethics is incoherent. “I have evolved a disposition to harm people like you; therefore, you deserve to be harmed.” How does a person deserve punishment just because somebody else evolved a disposition to punish him?
Do we solve the question of gay marriage by determining whether the accusers actually have a genetic disposition to kill homosexuals? And if we discover they do, we leap to the conclusion that homosexuals DESERVE to be killed?
Why evolve a disposition to punish? That makes no sense.
What is this practice of praise and condemnation that is central to morality? Of deserved praise and condemnation? Does it make sense to punish somebody for having the wrong genes?
What, according to evolutionary ethics, is the role of moral argument?
Does genetics actually explain such things as the end of slavery, and a woman’s right to vote? Those are very fast genetic changes.
The reason that the Euthyphro argument works against evolutionary ethics is that—regardless of what evolution can teach us about what we do value, it teaches us that our values are not fixed. Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer. Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires—resulting in an institution where the question of the difference between right and wrong is the question of the difference between what we should and should not praise or condemn.
It’s lunchtime, so for fun I will answer some of your rhetorical questions.
Evolutionary Biology might be good at telling us what we value. However, as GE Moore pointed out, ethics is about what we SHOULD value.
Unless GE Moore is either an alien or an artificial intelligence, he is telling us what we should value from a human brain that values things based on its evolution. How will he be able to make any value statement and tell you with a straight face that his valuing that thing has NOTHING to do with his evolution?
Besides, evolutionary ethics is incoherent. “I have evolved a disposition to harm people like you; therefore, you deserve to be harmed.” How does a person deserve punishment just because somebody else evolved a disposition to punish him?
My disposition to harm people is triggered roughly in proportion to my judgement that this person has harmed or will harm me or someone I care about. My disposition doesn’t speak, but neither does my disposition to presume, based on experience, that the sun will rise tomorrow. What does speak says about the second that being able to predict the future based on the past is an incredibly effective way to understand the universe, so much so that it seems the universe’s continuity from the past to the future is a feature of the universe, not just a feature of the tools my mind has developed to understand the universe. As for my “incoherent” disposition to harm someone who is threatening my wife or my sister, I would invite you to consider life in a society where this disposition did not exist. Violent thieves would run roughshod over the non-violent, who would stand around naked, starving, and puzzled: “what can we do about this, after all?”
Do we solve the question of gay marriage by determining whether the accusers actually have a genetic disposition to kill homosexuals? And if we discover they do, we leap to the conclusion that homosexuals DESERVE to be killed?
This sentence seems somewhat incoherent but I’ll address what I think are some of the interesting issues it evokes, if not quite brings up.
First, public, open acceptance of homosexuality is a gigantic and modern phenomenon. If nothing else, it proves that an incredibly large number of humans DO NOT have any such genetic urge to kill homosexuals, or even to give them dirty looks when walking by them on the street, for that matter. So if there is a lesson here about concluding moral “oughts” from moral “is-es”, it is that anybody who previously concluded that homicidal hatred of homosexuals was part of human genetic moral makeup was using insanely flawed methods for understanding genetic morality.
I would say that all attempts to derive ought from is, to design sensible rules for humans living and working together, should be approached with a great deal of caution and humility, especially given the clear tendency towards erroneous conclusions that may also be in our genes. But I would also say that any attempt at determining useful and valuable rules for living and working together which completely ignores what we might learn from evolutionary morality is “wrong” to do so, and that any additional human suffering that occurs because these people willfully ignore useful scientific facts is blood on their hands.
What is this practice of praise and condemnation that is central to morality? Of deserved praise and condemnation? Does it make sense to punish somebody for having the wrong genes?
Well, it makes sense to restrict the freedom of anybody who does more social harm than social good if left unrestrained. It doesn’t matter whether the reason is bad genes or some other reason. We shoot a lion who is loose and killing suburbanites. You don’t have to call it punishment, but what if you do? It is still a gigantically sensible and useful thing to do.
Many genes produce tendencies in people that are moderated by feedback from the world. I have a tendency to be really good at linear algebra and math and building electronic things that work. Without education this might have gone unnoticed. Without positive accolades, I might have preferred to play the electric guitar. Perhaps someone who has a tendency to pick up things he likes and keep them, or to strike out at people who piss him off, will have behavior which is also moderated by his genes AND his environment. Perhaps training him to get along with other people will be the difference between an incarcerated petty thief and a talented corporate raider or linebacker.
The thing that is central to morality is inducing moral behavior. Praise and condemnation are not central; they are two techniques which may or may not help meet that end, and given that they have been enhanced by evolution, I’m guessing they actually do work in a lot of circumstances.
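For what it’s worth, here is a minimal sketch of how praise and condemnation could induce behavior as simple reinforcement signals (my own toy model, not drawn from the thread or any particular paper): positive feedback for one behavior and negative feedback for the other steadily shift a learned propensity toward the praised behavior. The action names, feedback values, and learning rate are arbitrary assumptions.

```python
# Toy reinforcement model: praise (+1) and condemnation (-1) act as feedback
# that shifts a learned propensity toward the praised behavior.
# All numbers are arbitrary illustrations.

import math
import random

def choose(propensity):
    """Pick 'share' with probability sigmoid(propensity), else 'grab'."""
    p_share = 1.0 / (1.0 + math.exp(-propensity))
    action = "share" if random.random() < p_share else "grab"
    return action, p_share

random.seed(0)
propensity, lr = 0.0, 0.3
feedback = {"share": +1.0, "grab": -1.0}   # praise vs condemnation

for step in range(200):
    action, p_share = choose(propensity)
    r = feedback[action]
    # Policy-gradient-style update: push the propensity toward rewarded actions.
    grad = (1 - p_share) if action == "share" else -p_share
    propensity += lr * r * grad

print("final probability of sharing: %.2f" %
      (1.0 / (1.0 + math.exp(-propensity))))
```

Nothing here says the praised behavior is therefore right; it only illustrates the mechanical sense in which praise and condemnation are tools for shaping learned dispositions.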
What, according to evolutionary ethics, is the role of moral argument?
Moral argument writ small is a band of humans hashing out how they will get along running on the savannah. This has probably been going on long enough to be evolutionarily meaningful. How do we share the meat and the furs from the animal we cooperatively killed? Who gets to have sex with whom, and how do we change that result to something we like better? What do we do about that guy who keeps pooping in the water supply? The evidence that “talking about it” is useful is the incredibly high level of cooperative organization that humans demonstrate compared to any other animal. Social insects are the only creatures I know of that even come close, and their high levels of organization took tens or hundreds of thousands of years to refine, while the productivity of the human corporation, or of anything we have built using a steam engine or a transistor, has all been accomplished in 100 years or so.
Does genetics actually explain such things as the end of slavery, and a woman’s right to vote? Those are very fast genetic changes.
Does genetics explain an artificial heart? The 4-minute mile? Walking on the moon without dying? The heart is evolved, as is our ability to run, our need to breathe, and our need for gravity. Without knowing exactly what the answer is, these non-moral and very recent examples bear a similar relationship to our genetics as do the recent moral examples in the question. Sorry not to answer this one, except by tangent.
Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer.
What can answer it if evolutionary ethics cannot? A science fiction story like Jesus, Moses, or Scientology that everybody decides to pretend is a morally relevant truth?
ALL your moral questioning and intuitions about right and wrong, and about the ability or inability of evolutionary investigations to provide answers: it seems to me it is all coming from your evolved brain interacting with the world. Which is what the brain evolved to do. By what reasoning are you able to separate your moral intuitions, which you seem to think are useful for evolving your moral values, from the moral intuitions your evolved brain makes?
Are you under the impression that it is the moral CONCLUSIONS that are evolved? It is not. The brain is a mechanism, some sort of information processor. Evolution occurs when a processor of one type outcompetes a processor of another type. The detailed moral conclusions reached by the mechanism that evolved are just that: new results coming from an old machine from some mixture of inputs, some of which are novel and some of which are same-old-same-old.
Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires—resulting in an institution where the question of the difference between right and wrong is the question of the difference between what we should and should not praise or condemn.
And you think this is somehow an alternative to an evolutionary explanation? Go watch the neurobiologists sussing out all the different ways that learning takes place in brains and see if you can tell me where the evolutionary part stopped, because to me it looked like learning algorithms are just beautifully evolved, with an exceptional compactness that is still unduplicated in silicon, even though silicon is millions of times faster than brains.
That was fun. Lunch is over. Back to writing Android apps.
First, I do have a couple of nitpicks:
Why evolve a disposition to punish? That makes no sense.
That depends. See here for instance.
Does it make sense to punish somebody for having the wrong genes?
This depends on what you mean by “punish”. If by “punish” you mean socially ostracize and disallow mating privileges, I can think of situations in which it could make evolutionary sense, although as we no longer live in our ancestral environment and have since developed a complex array of cultural norms, it no longer makes moral sense.
In any event, what you’ve written is pretty much orthogonal to what I’ve said; I’m not defending what you’re calling evolutionary ethics (nor am I aware of indicating that I hold that view; if anything, I took it to be a bit of a strawman). Descriptive evolutionary ethics is potentially useful, but normative evolutionary ethics commits the naturalistic fallacy (as you’ve pointed out), and I think the Euthyphro argument is fairly weak in comparison to that point.
The view you’re attacking doesn’t seem to take into account the interplay between genetic, epigenetic, and cultural/memetic factors in how moral intuitions are shaped and can be shaped. It sounds like a pretty flimsy position, and I’m a bit surprised that any ethicist actually holds it. I would be interested if you’re willing to cite some people who currently hold the viewpoint you’re addressing.
The reason that the Euthyphro argument works against evolutionary ethics is that—regardless of what evolution can teach us about what we do value, it teaches us that our values are not fixed.
Well, really it’s more neuroscience that tells us that our values aren’t fixed (along with how the valuation works). It also has the potential to tell us to what degree our values are fixed at any given stage of development, and how to take advantage of the present degree of malleability.
Because values are not genetically determined, there is a realm in which it is sensible to ask about what we should value, which is a question that evolutionary ethics cannot answer.
Of course; under your usage of evolutionary ethics this is clearly the case. I’m not sure how this relates to my comment, however.
Praise and condemnation are central to our moral life precisely because these are the tools for shaping learned desires
I agree that it’s pretty obvious that social reinforcement is important because it shapes moral behavior, but I’m not sure if you’re trying to make a central point to me, or just airing your own position regardless of the content of my post.
Yay for my favorite ethicist signing up for LessWrong!