Of course there’s a causal relation which explains the causal fact of this reference, but this causal explanation is not the same as the moral justification, and it’s not appealed to as the moral justification.
Of course it isn’t, because we’re doing meta-ethics here, and don’t yet have access to the notion of “moral justification”; we’re in the process of deciding which kinds of things will be used as “moral justification”.
It’s your metamorality that is human-dependent, not your morality; see my other comment.
Now I’m confused. I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
Since we don’t have conscious access to our premises, and we haven’t finished reflecting on them, we sometimes go around studying our own conclusions in an effort to discover what counts as a moral justification, but that’s not like a philosopher of pure emptiness constructing justificationness from scratch and appealing to some mysterious higher criterion. (Bearing in mind that when someone offers me a higher criterion, it usually ends up looking pretty uninteresting.)
I don’t understand how you can have preferences that you use to decide what ought to count as a “moral justification” without already having a moral reference frame.
Well, consider an analogy from mathematical logic: when you write out a formal proof that 2+2 = 4, at some point in the process, you’ll end up concatenating two symbols here and two symbols there to produce four symbols; but this doesn’t mean you’re appealing to the conclusion you’re trying to prove in your proof; it just so happens that your ability to produce the proof depends on the truth of the proposition.
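To make the analogy concrete, here is one way that proof can be written out formally, as a minimal sketch in Lean 4 (the particular formalization is just an illustration); the comments trace the successor-arithmetic unfolding that does the actual symbol-shuffling.

```lean
-- Minimal sketch: a formal proof of 2 + 2 = 4 in Lean 4.
-- The kernel verifies it by unfolding the definition of addition on
-- successor numerals:
--   S(S 0) + S(S 0) = S (S(S 0) + S 0) = S (S (S(S 0) + 0)) = S (S (S (S 0)))
-- The derivation never cites "2 + 2 = 4" as a premise; it just so happens
-- that the unfolding terminates this way because the proposition is true.
example : 2 + 2 = 4 := rfl
```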
Similarly, when an AI with Morality programmed into it computes the correct action, it just follows the Morality algorithm directly, which doesn’t necessarily refer explicitly to “humans” as such. But human programmers had to program the Morality algorithm into the AI in the first place; and the reason they did so is because they themselves were running something related to the Morality algorithm in their own brains. That, as you know, doesn’t imply that the AI itself is appealing to “human values” in its actual computation (the Morality program need not make such a reference); but it does imply that the meta-ethical theory used by the programmers compelled them to (in an appropriate sense) look at their own brains to decide what to program into the AI.
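A toy sketch of that object-level/meta-level split (every name and weight below is hypothetical, invented purely for illustration, not anyone’s actual proposal): the object-level program scores outcomes by features of the outcomes themselves and never mentions “humans”; the reason those particular weights got written down is a fact about the programmers’ brains, and it appears nowhere in the computation.

```python
# Toy illustration only -- hypothetical names and weights, not a real
# "Morality algorithm". The object-level computation below evaluates
# outcomes by their own features; nothing in it refers to "humans" or to
# the programmers who wrote it.

def morality_score(outcome: dict) -> float:
    """Object-level: score an outcome by its own features."""
    return (
        10.0 * outcome.get("lives_saved", 0)
        + 1.0 * outcome.get("promises_kept", 0)
        - 5.0 * outcome.get("suffering_caused", 0)
    )

def choose_action(actions: dict[str, dict]) -> str:
    """Pick the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: morality_score(actions[a]))

# Meta-level: WHY these weights were chosen is a fact about the programmers
# (they consulted something like this algorithm running in their own heads),
# but that fact is not referenced anywhere in the code above.
if __name__ == "__main__":
    options = {
        "pull_lever": {"lives_saved": 5, "suffering_caused": 1},
        "do_nothing": {"lives_saved": 0, "suffering_caused": 0},
    }
    print(choose_action(options))  # -> "pull_lever"
```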
That would be epistemic preferences. It’s epistemology (and allied fields, like logic and rationality) that really runs into circularity problems.