The part that dealt with ethics was incredibly naive. About 47 minutes in, for example, he is counseling us not to fear ET, because ET’s morality will inevitably be superior to our own.
This seems pretty daft to me too. It looks like a kind of moral realism on which being eaten by aliens might well be “good”, since it leads to more “goodness”.
I have some sympathies for the idea that convergent evolution is likely to eventually result in a universal morality—rather than, say, pebble sorters and baby eaters. If true, that might be considered to be a kind of moral realism.
It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later. Plus you probably need some kind of argument that the limit of the convergence is pretty much independent of the starting point.
My own viewpoint on morality is closely related to this. I think that what one morally ought to do now is the same as what one prudentially and pragmatically ought to do in an ideal world in which all agents are rational, communication between agents is cheap, there are few, if any, secrets, and lifetimes are long. In such a society, a strongly enforced “social contract” will come into existence, which will have many of the characteristics of a universal morality. At least within a species. And to some degree, between species.
It is a kind of moral realism if you add in the proclamation that one ought to do now that which we all converge toward doing later.
...or if you think what we ought to be doing is helping to create the thing with the universal moral values.
I’m not really convinced that the convergence will be complete, though. If two advanced alien races meet, they probably won’t agree on all their values—perhaps due to moral spontaneous symmetry breaking—and small differences can become important.
This seems pretty daft to me too. It looks like a kind of moral realism on which being eaten by aliens might well be “good”, since it leads to more “goodness”.
Right. But moral realism is not necessarily daft. It only becomes so when you add in universalism and a stricture against self-indexicality.