I lean anti-realist myself, but if you pin me down I have to remain skeptical about the existence of moral facts because of epistemic circularity. Nonetheless, I believe we can extend epistemic particularism to ethics, letting us reason as if we had knowledge of moral facts. I'd go one step further and say moral realism is popular precisely because people are adopting moral particularism, possibly without realizing it, since they confuse making a necessary but ultimately speculative assumption with having knowledge. Particularism is the position that lets you make progress given the “correctness” of skepticism.
I’ve argued previously that adopting moral particularism is probably necessary for constructing aligned AI: otherwise we have no way to pick norms for resolving conflicts among the values the AI is trying to align to, short of much stronger metaphysical speculation that risks hurting us if we speculate incorrectly. I plan to explore this idea further, and if you’re interested this might be an opportunity for collaboration, since I think everything you describe as the value to AI alignment of assuming moral realism can be had by adopting moral particularism instead, without wading into the realist/anti-realist debate.