I don’t think the analogy to languages holds water. Substituting a word for a different word doesn’t have the kind of impact on what people do that substituting a moral rule for a different moral rule does. Put another way, there are selection pressures constraining what human moralities look like that don’t constrain what human languages look like.
Substituting a word for a different word doesn’t have the kind of impact on what people do that substituting a moral rule for a different moral rule does.
This sounds like a strawman argument to me. It doesn’t refute the argument that part of morality is cultural but still rests on a shared morality-learning mechanism.
there are selection pressures constraining what human moralities look like that don’t constrain what human languages look like.
There are also selection pressures constraining what human languages look like that don’t constrain what human moralities look like. Or to give another example: there are selection pressures that constrain what dogs look like that don’t constrain what catfish look like, and vice versa. That doesn’t mean they don’t also have similarities.