I suppose only helping someone on egoistic grounds sounds off
I’m not sure even that does, when it’s put in an appropriate way. “I’m doing this because I care about you, I don’t like to see you in trouble, and I’ll be much happier once I see you sorted out.”
There are varieties of egoism that can’t honestly be expressed in such terms, and those might be harder to put in terms that make them sound moral. But I think their advocates would generally not claim to be moral in the first place.
I think Stocker (the author of the paper) is making the following mistake. Utilitarianism, for instance, says something like this:
The morally best actions are the ones that lead to maximum overall happiness.
But Stocker’s argument is against the following quite different proposition:
We should restructure our minds so that all we do is calculate maximum overall happiness.
And one problem with this (from a utilitarian perspective) is that such a restructuring of our minds would greatly reduce their ability to experience happiness.
We have to distinguish between normative ethics and specific moral recommendations. Utilitarianism is a normative ethical theory: it tells you what constitutes a good decision given particular facts, but it does not tell you that you possess those facts, how to acquire them, or how to search efficiently for that good decision. Normative ethical theories tell you what sorts of moral reasoning are admissible and what goals count as legitimate; they don’t hand you the answers.
For instance, believing in divine command theory (that moral rules come from God’s will) does not tell you what God’s will is. It doesn’t tell you whether to follow the Holy Bible or the Guru Granth Sahib or the Liber AL vel Legis or the voices in your head.
And similarly, utilitarianism does not tell you “Sleep with your cute neighbor!” or “Don’t sleep with your cute neighbor!” The theory hasn’t pre-calculated the outcome of a particular action. Rather, it tells you, “If sleeping with your cute neighbor maximizes utility, then it is good.”
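To make the criterion-versus-procedure point concrete, here is a minimal toy sketch (my own illustration, not Stocker’s and not anything from the comment above; the names and numbers are made up). The theory only says “pick whatever maximizes total happiness, given the consequences”; the `total_happiness` oracle below is exactly the thing it does not hand you.

```python
# Toy illustration (hypothetical names): utilitarianism as a criterion of
# rightness, not a decision procedure. It presupposes the facts about
# consequences; it says nothing about how to obtain or estimate them.

def morally_best(actions, total_happiness):
    """Return the action(s) that maximize overall happiness.

    `total_happiness` is assumed to be an oracle mapping an action to the
    total happiness it would produce.
    """
    actions = list(actions)
    best = max(total_happiness(a) for a in actions)
    return [a for a in actions if total_happiness(a) == best]


# Whether "sleep with your cute neighbor" comes out as good depends entirely
# on the oracle we plug in, which the theory itself leaves open.
outcomes = {"sleep with neighbor": 3, "don't": 5}   # made-up numbers
print(morally_best(outcomes.keys(), outcomes.get))  # -> ["don't"]
```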
The idea that the best action we can take is to self-modify into better utilitarian calculators (and not, say, into better experiencers of happiness) doesn’t seem to follow from the theory itself.
It looks like we’re in violent agreement. I mention this only because it’s not clear to me whether you were intending to disagree with me; if so, then I think at least one of us has misunderstood the other.
No, I was intending to expand on your argument. :)