However, there are certainly common elements in the world’s moral systems—common in ways that are not explicable by cultural common descent.
They could be explicable by common evolutionary descent: for instance, our ethics probably evolved because it was useful to animals living in large groups or packs with social hierarchies.
If there is one such optimum, and many systems eventually find it, then moral realism has a pretty good foundation.
No, not at all. That optimum may have evolved to be useful under the conditions we live in, but that doesn’t mean it’s objectively right.
You don’t seem to be entering into the spirit of this. The idea of there being one optimum which is found from many different starting conditions is not subject to the criticism that its location is a function of accidents in our history.
Rather obviously, since human morality is currently in a state of progressive development, it hasn’t reached any globally optimal value yet.
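(To make the convergence picture concrete, here is a minimal toy sketch. It is entirely illustrative, with a made-up landscape and update rule, and is not anything from the thread: assuming a single-peaked fitness function and a simple hill-climbing rule, searches started from many different points all end at the same optimum, so the optimum’s location is not an accident of where the search began.)

```python
import random

def fitness(x):
    # A single-peaked "fitness landscape" with its maximum at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(x, steps=5000, step_size=0.05):
    # Propose small random moves and keep only the ones that improve
    # fitness: a crude stand-in for selection pressure.
    for _ in range(steps):
        candidate = x + random.choice([-step_size, step_size])
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

# Many different starting conditions...
starts = [random.uniform(-10.0, 10.0) for _ in range(5)]
ends = [hill_climb(x) for x in starts]

# ...all end within one step of the same optimum at x = 3.
print([round(x, 2) for x in ends])
```

(The sketch only shows that the endpoint is a property of the landscape rather than of the starting point; whether such an endpoint would be objectively right is exactly what is disputed here.)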
Maybe I misunderstood your original comment. You seemed to be arguing that moral progress is possible based on convergence. My point was that even if morality does reach a globally convergent value, that doesn’t mean the value is objectively optimal, or the true morality.
In order to talk about moral “progress”, or an “optimum” value, you need to first find some objective yardstick. Convergence does not establish that such a yardstick exists.
I agree with your comment, except that there are some meaningful definitions of morality and moral progress that don’t require morality to be anything but a property of the agents who feel compelled by it, and which don’t just assume that whatever happens is progress.
(In essence, it is possible, though very difficult for human beings, to figure out what the correct extrapolation from our confused notions of morality might be, remembering that the “correct” extrapolation is itself going to be defined in terms of our current morality and aesthetics. This actually ends up going somewhere, because our moral intuitions are a crazy jumble, but our more meta-moral intuitions, like non-contradiction and universality, are less jumbled than our object-level intuitions.)
Well, of course you can define “objectively optimal morality” to mean whatever you want.
My point was that if there is natural evolutionary convergence, then it makes reasonable sense to define “optimal morality” as the morality of the optimal creatures. If there were a better way of behaving (in the eyes of nature), then the supposedly optimal creatures would not be very optimal.