my actual moral arguments generally consist in appealing to implicit and hopefully shared reasons-for-action, not derivations from axioms
So would it be fair to say that your actual moral arguments do not consist of sufficiently careful reasoning?
these reasons-for-action both have an effective description (descriptively speaking)
Is there a difference between this claim and the claim that our actual cognition about morality can be described as an algorithm? Or are you saying that these reasons-for-action constitute (currently unknown) axioms which together form a consistent logical system?
Can you see why I might be confused? The former interpretation is too weak to distinguish morality from anything else, while the latter seems too strong given our current state of knowledge. But what else might you be saying?
any idealized or normative version of them would still have an effective description (normatively speaking).
Similar question here. Are you saying anything beyond the claim that any idealized or normative way of thinking about morality is still an algorithm?