I just came up with this name for the thing I think I am seeing here—it's artificial morality. It is when you feel some things are moral and some are not, then you come up with a theory on why some things are moral and others are not, then you apply that theory to come up with other things that should feel moral/immoral, and then you try to impose these "should" feelings on others even though there might not be a single person on earth who actually feels that.
I resonate with this sentiment, but am also hesitant, since you could say similar things about linear algebra or prime factorizations, or most of mathematics:
You first come up with a theory of how to determine whether a number is prime, based on the ones you already know are prime, then you apply that theory to some numbers you intuitively thought were not prime to show that they are indeed prime, and then you impose that mathematical knowledge on others, even though there might currently not be a single person on earth who actually thinks the number you highlight is prime.
Or maybe a more historically accurate example is non-Euclidean geometry, which, if I remember correctly, had long been assumed to be inconsistent, and many of the people who developed non-Euclidean geometry actually set out to prove its inconsistency. But when they applied the methods they had used on other mathematical theorems to non-Euclidean geometry, they found that it should actually be consistent, and then they imposed that feeling of shouldness onto others, even though at the time the dominant mode of thinking was that non-Euclidean geometry was inconsistent.
This is not an accurate comparison, for the simple reason that “prime number” is a formally defined concept. The reason we think that 2 or 5 or 13 are prime isn’t that we have an un-formalized (and perhaps un-formalizable) intuition that they’re prime; it’s that we have a formal definition, and 2 and 5 and 13 fit it!
So when we consider a number like 2,345,387,436,980,981, our “intuitions” about whether it’s prime, or whether anyone “thinks” it’s prime, are just as irrelevant as they are to the question of whether 2 is prime. Either a number fits the formal definition, or it doesn’t fit the formal definition, or we are as yet unable to determine whether it fits the formal definition. Nothing else matters.
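To make the formal-definition point concrete, here is a minimal sketch in Python: a naive trial-division check of the definition itself, not a serious primality test, in which no intuition enters into the verdict at any step.

```python
def is_prime(n: int) -> bool:
    """Check the formal definition mechanically: n is prime iff n > 1
    and no integer d with 2 <= d <= sqrt(n) divides n. (Checking up to
    sqrt(n) suffices, since any factorization has a factor <= sqrt(n).)"""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # found a divisor: n does not fit the definition
        d += 1
    return True  # no divisor exists: n fits the definition

print(is_prime(2), is_prime(5), is_prime(13))  # True True True

# The sixteen-digit number quoted above: tens of millions of loop steps,
# so slow in pure Python, but entirely mechanical either way.
print(is_prime(2_345_387_436_980_981))
```

Either the loop finds a divisor or it does not; as the comment says, nothing else matters.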
With moral intuitions, obviously, things could not be more different…
I think you are overestimating the degree to which we have formal definitions for core mathematical concepts, or at least underestimating the degree to which it was possible to make progress before we had formalized a large chunk of modern mathematics.
While I agree that morality is generally harder to formalize than mathematics, I do think we are only talking about a difference in degree, not a difference in kind. The study of mathematics is the study of our intuitions about certain types of relationships between mental objects we have in our minds (which are probably informed by our real-world experience). We tend to develop mathematics in the areas where people's intuitions about their mental objects agree with one another, or where we can reliably induce similar intuitions with the use of thought experiments or examples (e.g. counting apples, number lines, falling objects, linear transformations, dividing pies between friends, etc.).
The study of morality is likewise the study of a different set of relationships, ones that might be less universal, but not qualitatively different in their universality from our intuitions about mathematical relationships. Good moral philosophy similarly tries to find out which moral intuitions people share, or to induce shared intuitions with the help of examples and thought experiments, and then applies the standards of consistency (which is just another aesthetic intuition), logical argument (also just based on aesthetic intuitions), and conceptual elegance to extend their domain, much as mathematicians extended our intuitions about dividing pies into the concepts of the rational and real numbers.
Edit: A related point is that a proof in mathematics is just the application of a set of rules that seem self-evidently true to other mathematicians. If for some reason you do not find the principle of induction, or the concept of proof by contradiction, intuitively compelling, then proofs built on them will not be compelling to you. Mathematics is just built on our intuitions of how logical reasoning is supposed to work. Good moral philosophy tries to establish the foundations of our intuitions of how moral reasoning is supposed to work, and then to apply those foundations to come to a deeper understanding of morality, similarly to how mathematics applied its foundations to come to a much deeper understanding of what logical truth is.
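For illustration (a standard textbook formulation, not anything specific to this thread): the two proof principles named above can themselves be written as bare inference rules, and a proof built from them is only as compelling as the rules are.

```latex
% Induction: from P(0) and the inductive step, conclude P(n) for every n.
% Proof by contradiction: from "assuming not-P yields absurdity", conclude P.
\[
\frac{P(0) \qquad \forall n\,\bigl(P(n) \Rightarrow P(n+1)\bigr)}{\forall n\; P(n)}
\qquad\qquad
\frac{\lnot P \vdash \bot}{P}
\]
```

Nothing inside the rules certifies the rules themselves; accepting them is exactly the kind of bedrock intuition at issue here.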
I disagree with your evaluation of both mathematics and morality, but it seems like we’ve wandered into somewhat of a tangent. I think I prefer to table this discussion until another time, with apologies.
Seems good. It does seem pretty removed from the OP.
Indeed. This, essentially, describes utilitarianism as a whole, which one can summarize thus:
Step 1: Notice a certain moral intuition (roughly—that it’s better when people’s lives are good than when they are bad; and it’s better when good things happen to more people, than to fewer).
Step 2: Taking this moral intuition as an axiom, extrapolate it into an entire, self-consistent moral system, which addresses all possible questions of moral action.
Step 3: Notice that one has other moral intuitions, and that some of them conflict with the dictates of the constructed system.
Step 4: Dismiss these other moral intuitions as invalid, on the grounds of their conflict with the constructed system.
Bonus Step: Conveniently forget that the whole edifice began with a moral intuition in the first place (and how otherwise—what else was there for it to have begun from?).
While I agree that this is a common error mode in ethics, saying that this "describes utilitarianism as a whole" strikes me as a strawman.
How do you mean? I agree that it’s an error mode, but… what I described isn’t (as far as I can tell) “utilitarianism gone wrong”; it’s just what utilitarianism is, period. (That is, I certainly don’t think that what I was doing constitutes anything like “tarring all utilitarians by association with the mistaken ones”! It truly seems to me that utilitarianism, at its core, consists entirely[1] of the exact thing I described.)
[1] No doubt there are exceptions, as all moral theories, especially popular and much-discussed ones like utilitarianism, have esoteric variants. But if we consider the (generously defined) central cluster of utilitarian views, I stand by my comments.
Hmm, we might have different experiences of how the word "utilitarianism" is used in ethics. While your definition is adjacent to how I see it used, it is missing an important subset of moral views that I see as quite central to the term. As an example, see Sam Harris's The Moral Landscape, which argues for utilitarianism, but for a version that does not seem to align with your critique/definition.
But arguing over definitions is a lot less exciting, and I think we both agree that this is a common error mode in ethics. So let’s maybe table this for now.