I boringly agree with Holden, and with the old-hat views expressed in By Which It May Be Judged, Morality as Fixed Computation, Pluralistic Moral Reductionism, The Hidden Complexity of Wishes, and Value is Fragile.
There’s a particular logical object my brain is trying to point at when it talks about morality. (At least, this is true for some fraction of my moral thinking, and true if we allow that my brain may leave some moral questions indeterminate, so that the logical object is more like a class of moralities that are all equally good by my lights.)
Just as we can do good math without committing ourselves to a view about mathematical Platonism, we can do good morality without committing ourselves to a view about whether this logical object is ‘really real’. And I can use this object as a reference point to say that, yes, there has been some moral progress relative to the past; and also some moral decay. And since value is complex, there isn’t a super simple summary I can give of the object, or of moral progress; we can talk about various examples, but the outcomes depend a lot on object-level details.
The logical object my brain is pointing at may not be exactly the same one your brain is pointing at; but there’s enough overlap among humans in general that we can usefully communicate about this ‘morality’ thing, much as it’s possible to communicate about theorems even though (e.g.) some mathematicians are intuitionists and others aren’t. We just focus on the areas of greatest overlap, or clarify which version of ‘morality’ we’re talking about as needed.
I don’t think ‘realist’ vs. ‘anti-realist’ is an important or well-defined distinction here. (Like, there are ways of defining it clearly, but the ethics literature hasn’t settled on a single consistent way to do so.)
Some better distinctions include:
- Do you plan to act as though moral claim X (e.g., ‘torturing people is wrong’) is true? (E.g., trying to avoid torturing people and trying to get others not to torture people too.)
- If you get an opportunity to directly modify your values, do you plan to self-modify so that you no longer act as though moral claim X is true? (Versus trying to avoid such self-modifications, or not caring.)
  - This is similar to the question of whether you’d endorse society changing its moral views about X.
- Direct brain modifications aside, is there any information you could learn that would cause you to stop acting as though X is true? If so, what information would do the job?
  - This is one way of operationalizing the difference between ‘more terminal’ versus ‘more instrumental’ values: if you think torture is bad unconditionally, and plan to act accordingly, then you’re treating it more like it’s ‘terminally’ bad.
  - (Note that treating something as terminally bad isn’t the same thing as treating it as infinitely bad. Nor is it the same as being infinitely confident that something is bad. It just means that the thing is always a cost in your calculus; see the toy sketch after this list.)
- Insofar as you’re uncertain about which moral claims are true (or about which moral claims you’ll behaviorally treat as though they were true, avoid self-modifying away from, etc.), what processes do you trust more or less for getting answers you’ll treat as ‘correct’?
  - If we think of your brain as trying to point at some logical object ‘morality’, then this reduces to asking which processes (psychological, social, etc.) tend to better pinpoint members of the class.
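To make the ‘terminally bad but not infinitely bad’ point above concrete, here’s a toy Python sketch. It isn’t anyone’s actual decision procedure; the option names, the numbers, and the TORTURE_PENALTY constant are all invented for illustration. The point is just that a terminal disvalue shows up as a finite cost in every option that contains it, whereas an ‘infinitely bad’ treatment vetoes any such option no matter what else is at stake.

```python
# Toy illustration only: contrasting 'terminally (but finitely) bad' with
# 'infinitely bad'. All names and numbers are made up.

TORTURE_PENALTY = 1000.0  # invented finite disvalue per instance of torture


def terminal_cost_score(other_value: float, torture_instances: int) -> float:
    """Terminal-but-finite treatment: torture always counts as a cost,
    though a large enough gain elsewhere could in principle outweigh it."""
    return other_value - TORTURE_PENALTY * torture_instances


def infinite_cost_score(other_value: float, torture_instances: int) -> float:
    """'Infinitely bad' treatment: any torture at all vetoes the option."""
    return float("-inf") if torture_instances > 0 else other_value


options = {
    "option_a": {"other_value": 500.0, "torture_instances": 0},
    "option_b": {"other_value": 5000.0, "torture_instances": 1},
}

for name, o in options.items():
    print(
        name,
        terminal_cost_score(o["other_value"], o["torture_instances"]),
        infinite_cost_score(o["other_value"], o["torture_instances"]),
    )
# Finite-penalty scoring: option_a -> 500.0, option_b -> 4000.0 (the penalty is
# real but can be outweighed). Infinite-penalty scoring: option_b -> -inf
# (it can never win, regardless of what else it offers).
```

On this toy picture, being ‘terminally’ against torture is a fact about the shape of the scoring function (torture never shows up as a benefit), not about the size of the penalty or how confident you are that torture is bad.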
So who gets sent to jail if various people disagree about the wrongness of an action?