I don’t think ‘realist’ vs. ‘anti-realist’ is an important or well-defined distinction here. (Like, there are ways of defining it clearly, but the ethics literature hasn’t settled on a single consistent way to do so.)
Some better distinctions include:
Do you plan to act as though moral claim X (e.g., ‘torturing people is wrong’) is true? (E.g., trying to avoid torturing people and trying to get others not to torture people too.)
If you get an opportunity to directly modify your values, do you plan to self-modify so that you no longer act as though moral claim X is true? (Versus trying to avoid such self-modifications, or not caring.)
This is similar to the question of whether you’d endorse society changing its moral views about X.
Direct brain modifications aside, is there any information you could learn that would cause you to stop acting as though X is true? If so, what information would do the job?
This is one way of operationalizing the difference between ‘more terminal’ versus ‘more instrumental’ values: if you think torture is bad unconditionally, and plan to act accordingly, then you’re treating it more like it’s ‘terminally’ bad.
(Note that treating something as terminally bad isn’t the same thing as treating it as infinitely bad. Nor is it the same as being infinitely confident that something is bad. It just means that the thing is always a cost in your calculus; the toy sketch after this list makes the contrast concrete.)
Insofar as you’re uncertain about which moral claims are true (or, equivalently, about which moral claims you’ll behaviorally treat as though they were true, avoid self-modifying away from, etc.), what processes do you trust more or less for getting answers you’ll treat as ‘correct’?
If we think of your brain as trying to point at some logical object ‘morality’, then this reduces to asking which processes (psychological, social, etc.) tend to better pinpoint members of the class.
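To make the terminal/instrumental/infinite contrast above concrete, here’s a minimal toy sketch in Python. Everything in it (the penalty weight, the outcome fields, the function names) is a hypothetical illustration I’m supplying, not a claim about how anyone’s actual values work; it just shows how a value that is “always a cost” differs both from a value that only matters via its consequences and from one that is treated as infinitely bad.

```python
# Toy sketch (hypothetical numbers and field names) of three ways a utility
# calculus can treat the badness of torture.

def utility_terminal(outcome):
    # Terminal disvalue: torture is *always* a cost, whatever its downstream
    # effects -- but a finite one, so it can in principle be outweighed.
    TORTURE_PENALTY = 50.0  # hypothetical finite weight
    u = outcome["other_goods"]
    if outcome["torture"]:
        u -= TORTURE_PENALTY
    return u

def utility_instrumental(outcome):
    # Instrumental disvalue: torture only matters via its consequences; if it
    # somehow caused no downstream harm, it would count for nothing here.
    return outcome["other_goods"] - outcome["downstream_harm"]

def utility_infinite(outcome):
    # Infinite disvalue: a veto that no finite gain can offset.
    return float("-inf") if outcome["torture"] else outcome["other_goods"]

outcome = {"torture": True, "other_goods": 100.0, "downstream_harm": 0.0}
print(utility_terminal(outcome))      # 50.0  -- still a cost, even with no downstream harm
print(utility_instrumental(outcome))  # 100.0 -- the torture itself doesn't register
print(utility_infinite(outcome))      # -inf  -- nothing can compensate
```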