I have some sympathy for the idea that convergent evolution is likely to eventually result in a universal morality—rather than, say, pebble sorters and baby eaters. If true, that might be considered to be a kind of moral realism.
It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later. Plus you probably need some kind of argument that the limit of the convergence is pretty much independent of the starting point.
My own viewpoint on morality is closely related to this. I think that what one morally ought to do now is the same as what one prudentially and pragmatically ought to do in an ideal world in which all agents are rational, communication between agents is cheap, there are few, if any, secrets, and lifetimes are long. In such a society, a strongly enforced “social contract” will come into existence, one with many of the characteristics of a universal morality, at least within a species and, to some degree, between species.
It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later.
...or if you think what we ought to be doing is helping to create the thing with the universal moral values.
I’m not really convinced that the convergence will be complete, though. If two advanced alien races meet, they probably won’t agree on all their values—perhaps due to moral spontaneous symmetry breaking, where similar starting conditions settle into different but equally stable value systems—and small differences can become important.