It’s confusing how the term “realism” is used when applied to ethics, which I think obscures a natural position relevant to alignment. Realism about mathematical objects (small-p platonism?) doesn’t say that there is One True Mathematical Object to be discovered by all civilizations; instead, there are many objects that exist, governing the truth of propositions about them. When discussing a mathematical question, we first produce references to some objects, in order to locate them, and then build models of situations involving them, to understand how these objects work and to formulate claims that are true about them. The references depend on the mathematician’s choice of topic or problems, while the truth of the models, given the objects, doesn’t depend on the references and hence doesn’t depend on the mathematician. The dependence works in two steps: the references, which reside in the mathematician, single out mathematical objects, and the objects in turn determine models, which again reside in the mathematician. Even though the models depend on the references, the references are screened off by the objects: given the objects, the models no longer depend on the references.
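To state the screening-off claim a bit more explicitly (the notation here is my own gloss, using standard conditional-independence notation, not anything from the post): writing $R$ for the references, $O$ for the objects they single out, and $M$ for the models, the dependence forms the chain $R \to O \to M$, and screening off says

$$P(M \mid O, R) = P(M \mid O),$$

i.e. once the objects are fixed, the models carry no further dependence on the references that located them.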
This straightforwardly applies to ethics, except that, unlike in mathematics, the references are typically vague. The resulting position is realist in the sense that it treats moral facts (objects) as real, mind-independent entities governing the truth of propositions about them, but the choice of which moral objects to consider depends on the agent, which is usually called an anti-realist feature, making it difficult to fit the position into a realist/anti-realist narrative. Here, the results of considering the moral objects are not necessarily intuitively transparent, and their models can be unlike the references that singled them out for consideration; the correctness of the models doesn’t depend on the attitude of the agent but is determined by the moral objects themselves, whose origin in references within the agent is screened off.
This position is, according to the post’s own taxonomy, the only one not discussed in the post! Here, what you should do depends on your current values, yet ideal understanding need not bring your values into harmony with what you should do. That is, a probable outcome is alienation from what you should do, even though what you should do is determined by what you currently value.