I’d like to recommend a fun little piece called The Schizophrenia of Modern Ethical Theories (PDF), which points out that popular moral theories look very strange when actually applied as grounds for action in real-life situations. Minimally, the author argues that certain reasons for action are incompatible with certain motives, and that this becomes incoherent if we suppose that those motives were (at least partially) the motivation we had for adopting that set of reasons in the first place.
For example, if you tend to your sick friend, but explain to them that you are (really only) doing so on utilitarian grounds, or on egoistic grounds, or because you are obligated to do so, etc., well... doesn’t that seem off? And don’t those reasons for action, presumably a generalization from a great many specific situations of this sort, seem incompatible with the original motivation that we felt was morally good?
… no? I mean, maybe it will sound weird if you actually say it, because that’s not a norm in our culture, but apart from that, it doesn’t seem morally bad or off to me.
ETA: well, I suppose helping someone only on egoistic grounds sounds off, but the utilitarian/moral-obligation motivations still seem fine to me.
I’m not sure even that does, when it’s put in an appropriate way. “I’m doing this because I care about you, I don’t like to see you in trouble, and I’ll be much happier once I see you sorted out.”
There are varieties of egoism that can’t honestly be expressed in such terms, and those might be harder to put in terms that make them sound moral. But I think their advocates would generally not claim to be moral in the first place.
I think Stocker (the author of the paper) is making the following mistake. Utilitarianism, for instance, says something like this:
The morally best actions are the ones that lead to maximum overall happiness.
But Stocker’s argument is against the following quite different proposition:
We should restructure our minds so that all we do is calculate maximum overall happiness.
And one problem with this (from a utilitarian perspective) is that such a restructuring of our minds would greatly reduce their ability to experience happiness.
We have to distinguish between normative ethics and specific moral recommendations. Utilitarianism falls into the class of normative ethical theories. It tells you what constitutes a good decision given particular facts; but it does not tell you that you possess those facts, or how to acquire them, or how to optimally search for that good decision. Normative ethical theories tell you what sorts of moral reasoning are admissible and what goals are credible; they don’t give you the answers.
For instance, believing in divine command theory (that moral rules come from God’s will) does not tell you what God’s will is. It doesn’t tell you whether to follow the Holy Bible or the Guru Granth Sahib or the Liber AL vel Legis or the voices in your head.
And similarly, utilitarianism does not tell you “Sleep with your cute neighbor!” or “Don’t sleep with your cute neighbor!” The theory hasn’t pre-calculated the outcome of a particular action. Rather, it tells you, “If sleeping with your cute neighbor maximizes utility, then it is good.”
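To make that concrete, here is a minimal toy sketch (in Python, with made-up names and happiness numbers; nothing here is from the paper or any actual utilitarian calculus) of what the theory does and does not hand you:

```python
# Toy illustration: utilitarianism as a criterion of rightness, not a decision procedure.
# All function names and numbers below are hypothetical, for illustration only.

def total_happiness(outcome):
    """The criterion the theory supplies: sum happiness over everyone affected."""
    return sum(outcome.values())

def morally_best(actions_to_outcomes):
    """Given facts about what each action leads to, pick the happiness-maximizing one."""
    return max(actions_to_outcomes, key=lambda a: total_happiness(actions_to_outcomes[a]))

# The theory gives you the two functions above. It does NOT give you this dictionary:
# what each action would actually lead to is an empirical question the theory leaves open.
facts = {
    "visit sick friend": {"friend": 8, "me": 5},
    "stay home":         {"friend": 1, "me": 6},
}

print(morally_best(facts))  # -> "visit sick friend", *if* the facts are as stated
```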
The idea that the best action we can take is to self-modify to become better utilitarian reasoners (and not, say, to self-modify to be better experiencers of happiness) doesn’t seem to follow.
It looks like we’re in violent agreement. I mention this only because it’s not clear to me whether you were intending to disagree with me; if so, then I think at least one of us has misunderstood the other.
No, I was intending to expand on your argument. :)
If I tell my friend that I am visiting him on egoistic grounds, it suggests that being around him and/or promoting his well-being gives me pleasure or something like that, which doesn’t sound off—it sounds correct. I should hope that my friends enjoy spending time around me and take pleasure in my well-being.