Indeed. This, essentially, describes utilitarianism as a whole, which one can summarize thus:
Step 1: Notice a certain moral intuition (roughly—that it’s better when people’s lives are good than when they are bad; and it’s better when good things happen to more people than to fewer).
Step 2: Taking this moral intuition as an axiom, extrapolate it into an entire, self-consistent moral system, which addresses all possible questions of moral action.
Step 3: Notice that one has other moral intuitions, and that some of them conflict with the dictates of the constructed system.
Step 4: Dismiss these other moral intuitions as invalid, on the grounds of their conflict with the constructed system.
Bonus Step: Conveniently forget that the whole edifice began with a moral intuition in the first place (and how otherwise—what else was there for it to have begun from?).
While I agree that this is a common error mode in ethics, saying that it “describes utilitarianism as a whole” strikes me as a strawman.
How do you mean? I agree that it’s an error mode, but… what I described isn’t (as far as I can tell) “utilitarianism gone wrong”; it’s just what utilitarianism is, period. (That is, I certainly don’t think that what I was doing constitutes anything like “tarring all utilitarians by association with the mistaken ones”! It truly seems to me that utilitarianism, at its core, consists entirely[1] of the exact thing I described.)
[1] No doubt there are exceptions, as all moral theories, especially popular and much-discussed ones like utilitarianism, have esoteric variants. But if we consider the (generously defined) central cluster of utilitarian views, I stand by my comments.
Hmm, we might have different experiences of how the word “utilitarianism” is used in ethics. While your definition is adjacent to how I see the term used, it is missing an important subset of moral views that I see as quite central to it. As an example, see Sam Harris’s The Moral Landscape, which argues for utilitarianism, but for a version that does not seem to align with your critique/definition.
But arguing over definitions is a lot less exciting, and I think we both agree that this is a common error mode in ethics. So let’s maybe table this for now.