The basic problem is that it assumed there was an objective moral reality, and we have little evidence of that.
AFAICT this is false. CEV runs a check to see if human values turn out to cohere with each other (this says nothing about whether there is an objective morality), and if it finds that they do not, it gracefully shuts down.
My sense from reading the Arbital post on it is that Eliezer still considers it the ideal thing to do with an advanced AGI once we gain a very high degree of confidence in its ability to do very complex things (which admittedly means it's not very helpful for solving our immediate problems). I think some people disagree about it, but your statement as-worded seems mostly false to me.
(I recommend folks read the full article: https://arbital.com/p/cev/ )