There is no ground truth to something as ambiguous as “moral questions” in general, but there is ground truth to, e.g., “do humans on average prefer policy A or policy B when this choice is presented to them?”. There is also ground truth to things like “do humans typically think A or B is more morally correct when this choice is presented to them?”, and even “would this typical view be stable under a particular program of intelligence enhancement/reflection Z?” (though “is Z the best way to extrapolate humans?” does not have a ground truth).
“What would more humans vote for?” does have a ground truth, and predicting it seems like the kind of practice with human-modeling that could help develop something CEV-like in the future. Whereas “just do what’s right” does not, as you say, have a ground truth.
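To make that concrete, here is a toy sketch (the survey data and names like `predict_pref_prob` are made up, not from anything above) of why a question like “do humans on average prefer A or B?” is empirically checkable in a way that “just do what’s right” is not: a prediction about it can be scored against observed responses.

```python
# Toy sketch: "do humans on average prefer policy A or B?" as a checkable
# prediction task. All data and names here are hypothetical.
import math

# Hypothetical survey responses: 1 = respondent preferred policy A, 0 = policy B.
survey_responses = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

# Ground truth for the aggregate question: did most respondents prefer A?
majority_prefers_A = sum(survey_responses) / len(survey_responses) > 0.5

# Stand-in for a human-preference model's output: P(a respondent prefers A).
predict_pref_prob = 0.7

# Score the prediction against observed responses (average log loss),
# which is only possible because the question has a ground truth.
log_loss = -sum(
    r * math.log(predict_pref_prob) + (1 - r) * math.log(1 - predict_pref_prob)
    for r in survey_responses
) / len(survey_responses)

print(f"Majority prefers A: {majority_prefers_A}, log loss: {log_loss:.3f}")
```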
> That means there can only be approval and persuasion in perpetual memetic warfare against other cultures.
If you mean people are going to continue to argue for different value systems, that seems fine to me? And you can still decide what an AI is going to do (e.g., something CEV-like), even if there is no unambiguously correct choice.