“Universal values” presumably refers to values the universe will converge on, once living systems have engulfed most of it.
If rerunning the clock produces radically different moralities each time, the relativists would be vindicated.
If rerunning the clock produces highly similar moralities each time, the moral objectivists could declare victory.
Yeah, but Stefan’s post was about AI, not about minds that evolved in our universe.
Also, there is a difference between moral universalism and moral objectivism. What your last sentence describes is universalism, while Stefan is talking about objectivism:
“My claim is that compassion is a universal rational moral value. Meaning any sufficiently rational mind will recognize it as such.”
The idea that physics makes no mention of morality seems totally and utterly irrelevant to me. Physics makes no mention of convection, diffusion-limited aggregation, or fractal drainage patterns either—yet those things are all universal.
Agreed.
Assuming that I’m right about this:
http://alife.co.uk/essays/engineered_future/
...it seems likely that most future agents will be engineered. So, I think we are pretty much talking about the same thing.
Re: universalism vs objectivism—note that he does use the “u” word.