I generally explain my interest in doing good and considering ethics (despite being an anti-realist) as something like your point 5, and I don’t agree with, or fully get, your refutation that it’s not a good explanation, so I’ll engage with it and hope for clarifications.
My position, despite anti-realism and moral relativism, is that I do happen to have values (which I call “personal values”: they’re mine, and I don’t think there’s an absolute reason for anyone else to hold them, though I will advocate for them to some extent) and epistemics (despite the problem of the criterion) that have initialized in a space where I want to do Good, I want to know what is Good, and I want to iterate on improving my understanding of, and actions toward, doing Good.
A quick question—when you say “Personally, though, I collect stamps”, do you mean your personal approach to ethics is descriptive and exploratory (you’re collecting stamps in the sense of the “physics vs. stamp collecting” image), and that you don’t identify as a systematizer?
I wouldn’t identify as a “systematizer for its own sake” either; it’s not a terminal value, but it is an instrumental value for achieving my goal of doing Good. I happen to have priors and heuristics saying I can do more Good by systematizing better, so I do, and I get positive feedback from it, so I continue.

Re “conspicuous absence of subject-matter”—true for an anti-realist considering “absolute ethics”, but this doesn’t stop an anti-realist from considering what they’ll call “my ethics”. There can be as much subject-matter there as in realist absolute ethics, because you can simulate absolute ethics within “my ethics” via: “I epistemically believe there is no true absolute ethics, but my personal ethics is that I should adopt what I imagine the absolute, real ethics would be if it existed”. I assume this is an existing, theorized position, but not knowing whether it already has a standard name, I call it being a “quasi-realist”, which is how I’d currently describe myself.
I don’t buy that anti-realists treat consistency as absolute, so there’s nothing to explain. I view valuing consistency as instrumental, and it happens to win every time (every ethics has it) because, mathematically, you can’t rule anything out without it. I think the person who answers “eh, I don’t care that much about being ethically consistent” is correct that it’s not among their terminal values, but miscalculates (they actually should value it instrumentally); it’s a good mistake to point out.

I agree that someone who tries to justify their intransitivities by saying “oh, I’m inconsistent” is throwing out the baby with the bathwater, when they could simply say “I’m deciding to be intransitive here because it better fits my points”. Again, it’s a good mistake to point out.

I see anti-realists as picking up consistency because it’s a good property for a useful ethics to have, not because “Ethics” forced it onto them (it couldn’t; it doesn’t exist).
On the final paragraph, I would write my position as: “I do ethics, as an anti-realist, because I have a brute, personal preference for Doing Good (a cluster of helping other people, reducing suffering, and anything that stems from the intuitively appealing Veil of Ignorance), and this is self-reinforcing (I consider it Good to want to do Good and to improve at doing Good), so I want to improve my ethics. There exist zones of value space where I’m in the dark and have no intuition (e.g. population ethics/the repugnant conclusion), so I use good properties (consistency, ..) to craft a curve that extends my ethics, not out of a personal preference for blah-structural-properties, but out of the belief that this will best satisfy my preference for Doing Good”.

If a dilemma comes up pitting object-level stakes against some abstract structural constraint, I weigh my belief that my intuition on “is this Good?” is correct against my belief that “the model of ethics I constructed from the other points” is correct, and I’ll probably update one or both. Because of the problem of the criterion, I’m not going to trust either my ethics or my data points as absolute. I have uncertainty about the position of all my points and about the best shape of the curve, so sometimes I move my estimate of a point’s position because it fits the curve better, and sometimes I move the curve’s shape because I’m pretty sure the point should be there.
I hope that’s a fully satisfying answer to “Why do ethical anti-realists do ethics?” I wouldn’t say there’s an absolute reason why ethical anti-realists should do ethics.