We have to distinguish between normative ethics and specific moral recommendations. Utilitarianism is a normative ethical theory: it tells you what constitutes a good decision given particular facts, but it does not tell you whether you possess those facts, how to acquire them, or how to search efficiently for the best decision. Normative ethical theories tell you what sorts of moral reasoning are admissible and what sorts of goals are legitimate; they don’t hand you the answers.
For instance, believing in divine command theory (that moral rules come from God’s will) does not tell you what God’s will is. It doesn’t tell you whether to follow the Holy Bible or the Guru Granth Sahib or the Liber AL vel Legis or the voices in your head.
And similarly, utilitarianism does not tell you “Sleep with your cute neighbor!” or “Don’t sleep with your cute neighbor!” The theory hasn’t pre-calculated the outcome of a particular action. Rather, it tells you, “If sleeping with your cute neighbor maximizes utility, then it is good.”
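To make that division of labor concrete, here is a minimal sketch (in Python; the function name, the candidate actions, and the utility numbers are all mine and purely hypothetical) of what the theory supplies versus what it leaves open:

```python
# The normative criterion is one line: an action is right iff it
# maximizes utility among the available alternatives.
def is_right(action, actions, utility):
    return utility(action) == max(utility(a) for a in actions)

# Everything the theory does NOT hand you (made-up values, for illustration):
actions = ["sleep_with_neighbor", "do_not"]            # which options exist
utility = {"sleep_with_neighbor": 3, "do_not": 5}.get  # the actual consequences

print(is_right("sleep_with_neighbor", actions, utility))  # False
print(is_right("do_not", actions, utility))               # True
```

Filling in `utility` (the empirical facts) and enumerating or searching `actions` (the computational problem) are exactly the parts the theory stays silent on.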
The idea that the best action we can take is to self-modify into better utilitarian reasoners (rather than, say, into better experiencers of happiness) doesn’t seem to follow from the theory itself.
It looks like we’re in violent agreement. I mention this only because it’s not clear to me whether you were intending to disagree with me; if so, then I think at least one of us has misunderstood the other.
No, I was intending to expand on your argument. :)