but it would be an actual, explicitly encoded/incentivized goal.
The issue is that arguments ad clippy have a weakness: you assume that such a goal is realizable in order to argue that there is no absolute morality, on the grounds that this goal won't converge onto anything else. This does nothing to address the question of whether clippy can be constructed at all; if moral realism is true, clippy either can't be constructed or can't be arbitrarily intelligent (in which case it is no more interesting than a thermostat, which has the goal of maintaining a constant temperature and won't adopt any morality).