Empirical disputes tend to move from generalizations to particulars, since perception is regarded as the ultimate arbiter, and our perception is of particulars. So if two people disagree about whether oppositely charged objects attract or repel one another (a generalization), one of them might say, “Well, let’s see if this positively charged metal block attracts or repels this negatively charged block.” We rely on the fact that agents with similar perceptual systems will often agree about particular perceptions, and this is leveraged to resolve disagreement about general claims.
Moral disputes, on the other hand, tend to move in the opposite direction, from particulars to generalizations. Disputants start out disagreeing about the right thing to do in a particular circumstance, and they attempt to resolve the disagreement by appeal to general principles. In this case, we think that agents with similar biological and cultural backgrounds will tend to agree about general moral principles (“avoidable suffering is bad”, “discrimination based on irrelevant characteristics is bad”, etc.) and leverage this agreement to attempt to resolve particular disagreements. So the direction of justification is the opposite of what one would expect from the perceptual model.
This suggests to me that if there are moral truths, then our knowledge of them is probably not best explained using the perceptual model. I do agree that moral disagreements aren’t entirely like mathematical disagreements either, but I only brought up the mathematical case as an example of there being other “kinds of truth”. I didn’t intend to claim that morality and mathematics will share an epistemology. I would say that knowing moral truths is a lot more like knowing truths about, say, the rules for rational thinking.
Nah. It’s bi-directional, in roughly equal proportions.