I don’t know if you mean to come across this way, but the way you have written this makes it sound like you think utilitarians are cynically pretending to believe in utilitarianism to look good to others, but don’t really believe it in their heart of hearts. I don’t think this is true in most cases: I think utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia.
If you want a plausible theory as to how natural selection could produce sincere altruism, look at it from a game-theoretic perspective. People who could plausibly signal altruism and trustworthiness would get huge evolutionary gains because they could attract trading partners more easily. One of the more effective ways to signal that you possess a trait is to actually possess it. One of the most effective ways to signal you are altruistic and trustworthy is to actually be altruistic and trustworthy. So it’s plausible that humans evolved to be genuinely nice, trustworthy, and altruistic, because the evolutionary gains from getting trade partners to trust them outweighed the evolutionary losses from sacrificing for others. Akrasia can be seen as an evolved mechanism that sabotages our altruism in an ego-dystonic way, so that we can truthfully say we’re altruists without making maladaptive sacrifices for others.
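The payoff structure behind this argument can be sketched with a toy model. All of the numbers below are illustrative assumptions, not anything from the discussion: a genuine altruist always earns the trade partner’s trust but pays the cost of real sacrifice, while a faker avoids that cost but is detected (and shunned) with some probability.

```python
# Toy payoff model of the honest-signaling argument. The specific
# numbers (TRADE_GAIN, SACRIFICE_COST, DETECTION_P) are assumptions
# chosen only to illustrate the structure of the argument.

TRADE_GAIN = 5.0      # fitness benefit from attracting a trusting trade partner
SACRIFICE_COST = 1.0  # fitness cost of genuinely sacrificing for others
DETECTION_P = 0.5     # chance a faker's insincerity is spotted and they are shunned

def fitness_genuine():
    # Genuine altruists are always trusted, but always pay the cost.
    return TRADE_GAIN - SACRIFICE_COST

def fitness_faker():
    # Fakers pay no cost, but only capture the trade gain when undetected.
    return (1 - DETECTION_P) * TRADE_GAIN

print(fitness_genuine())  # 4.0
print(fitness_faker())    # 2.5
```

Under these assumptions, genuine altruism outcompetes faking whenever the detection probability exceeds the cost/benefit ratio (here, 1.0 / 5.0 = 0.2), which is exactly the "cheapest signal is the real trait" intuition in the paragraph above.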
Of course, the fact that our altruistic tendencies may have evolved from genetically selfish reasons gives us zero reason to behave in a selfish fashion today, except possibly as a means to prevent natural selection from removing altruism from existence. We are not our genes.
I think utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia.
If all you mean by “sincere” is not explicitly thinking of something as deceptive, that seems right to me, but if “sincere” is supposed to mean “thoughts and actions can be well-predicted by utilitarianism” I disagree. Utilitarian arguments get selectively invoked and special exceptions made in response to typical moral sentiments, political alignments, personal and tribal loyalties, and so forth.
I would say similar things about religious accounts of morality. Many people claim to buy Christian or Muslim or Buddhist ethics, but the explanatory power coming from these, as opposed to other cultural, local, and personal factors, seems limited.
If all you mean by “sincere” is not explicitly thinking of something as deceptive, that seems right to me, but if “sincere” is supposed to mean “thoughts and actions can be well-predicted by utilitarianism” I disagree.
I was focused more on the first meaning of “sincere.” I think that utilitarians’ abstract “far mode” ethical beliefs and thoughts are generally fairly well predicted by utilitarianism, but their “near mode” behaviors are not. I think that self-deception and akrasia are the main reasons there is such dissonance between their beliefs and behavior.
I think a good analogy is belief in probability theory. I believe that doing probability calculations, and paying attention to the calculations of others, is the best way to determine the likelihood of something. Sometimes my behavior reflects this: I don’t buy lottery tickets, for instance. But other times it does not. For example, I behave more cautiously when I’m out walking if I have recently read a vivid description of a crime, even if said crime occurred decades ago, or is fictional. I worry more about diseases with creepy symptoms than I do about heart disease. But I think I do sincerely “believe” in probability theory in some sense, even though it doesn’t always affect my behavior.
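The lottery example is the kind of calculation being appealed to here. With jackpot odds and ticket price roughly at Powerball scale (the figures below are assumed for illustration, not taken from the discussion), the expected value of a ticket is negative:

```python
# Illustrative expected-value arithmetic for a lottery ticket.
# The odds, jackpot, and price are assumed round numbers, not real data.

TICKET_PRICE = 2.0
JACKPOT = 100_000_000
P_WIN = 1 / 300_000_000  # assumed jackpot odds

# Expected net gain per ticket: probability-weighted payout minus price.
expected_value = P_WIN * JACKPOT - TICKET_PRICE

print(expected_value)  # about -1.67: each ticket loses money on average
```

The point of the analogy is that one can fully accept this arithmetic, act on it in cases like the lottery, and still fail to act on it when vividness or salience intrudes.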
One of the most effective ways to signal you are altruistic and trustworthy is to actually be altruistic and trustworthy.
I agree with the main thrust of the argument, but such signaling would only apply to potential trading partners to whom you make a habit of speaking openly and honestly about your motives, or who are unusually clever and perceptive, or both.
If you want a plausabile theory as to how natural selection could produce sincere altruism, look at it from a game-theoretic perspective. People who could plausibly signal altruism and trustworthiness would get huge evolutionary gains because they could attract trading partners more easily. One of the more effective ways to signal that you possess a trait is to actually possess it. One of the most effective ways to signal you are altruistic and trustworthy is to actually be altruistic and trustworthy. So it’s plausible that humans evolved to be genuinely nice, trustworthy, and altruistic, probably because the evolutionary gains from getting trade partners to trust them outweighed the evolutionary losses from sacrificing for others.
Altruism—at least in biology—normally means taking an inclusive fitness hit for the sake of others—e.g. see the definition of Trivers (1971), which reads:
Altruistic behavior can be defined as behavior that benefits another organism, not closely related, while being apparently detrimental to the organism performing the behavior, benefit and detriment being defined in terms of contribution to inclusive fitness
Proposing that altruism benefits the donor just means that you aren’t talking about genuine altruism at all, but “fake” altruism—i.e. genetic selfishness going by a fancy name. Such “fake” altruism is easy to explain. The puzzle in biology is to do with genuine altruism.
the way you have written this makes it sound like you think utilitarians are cynically pretending to believe in utilitarianism to look good to others, but don’t really believe it in their heart of hearts. I don’t think this is true in most cases, I think utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia.
So: I am most interested in explaining behaviour. In this case, I think virtue signalling is pretty clearly the best fit. You are talking about conscious motives. These are challenging to investigate experimentally. You can ask people—but self-reporting is notoriously unreliable. Speculations about conscious motives are less interesting to me.
Altruism—at least in biology—normally means taking an inclusive fitness hit for the sake of others—e.g. see the definition of Trivers (1971).
I thought it fairly obvious I was not using the biological definition of altruism. I was using the ethical definition of altruism—taking a self-interest hit for the sake of others’ self-interest. It’s quite possible for something to increase your inclusive fitness while harming your self-interest: unplanned pregnancy, for instance.
Proposing that altruism benefits the donor just means that you aren’t talking about genuine altruism at all, but “fake” altruism—i.e. genetic selfishness going by a fancy name.
I wasn’t proposing that altruism benefited the donor. I was proposing that it benefited the donor’s genes. That doesn’t mean that it is “fake altruism,” however, because self-interest and genetic interest are not the same thing. Self-interest refers to the things a person cares about and wants to accomplish (happiness, pleasure, achievement, love, fun); it doesn’t have anything to do with genes.
Essentially, what you have argued is:
Genuinely caring about other people might cause you to behave in ways that make your genes replicate more frequently.
Therefore, you don’t really care about other people, you care about your genes.
If I understand your argument correctly, it seems like you are committing some kind of reverse anthropomorphism. Instead of ascribing human goals and feelings to nonsentient objects, you are ascribing the metaphorical evolutionary “goals” of nonsentient objects (genes) to the human mind. That isn’t right. We don’t consciously or unconsciously act directly to increase our inclusive genetic fitness (IGF); we simply engage in behaviors for their own sake that happened to increase our IGF in the ancestral environment.
Altruism—at least in biology—normally means taking an inclusive fitness hit for the sake of others—e.g. see the definition of Trivers (1971).
I thought it fairly obvious I was not using the biological definition of altruism. I was using the ethical definition of altruism—taking a self-interest hit for the sake of others’ self-interest. It’s quite possible for something to increase your inclusive fitness while harming your self-interest: unplanned pregnancy, for instance.
So: I am talking about science, while you are talking about moral philosophy. Now that we have got that out of the way, there should be no misunderstanding—though in the rest of your post you seem keen to manufacture one.
So: I am talking about science, while you are talking about moral philosophy.
I was talking about both. My basic point was that the reason humans evolved to care about morality and moral philosophy in the first place was because doing so made them very trustworthy, which enhanced their IGF by making it easier to obtain allies.
My original reply was a request for you to clarify whether you meant that utilitarians are cynically pretending to care about utilitarianism in order to signal niceness, or whether you meant that humans evolved to care about niceness directly and care about utilitarianism because it is exceptionally nice (a “niceness superstimulus” in your words). I wasn’t sure which you meant. It’s important to make this clear when discussing signalling because otherwise you risk accusing people of being cynical manipulators when you don’t really mean to.