I think what I'm getting at is that there's a difference between human preferences and a human's preference for other humans. And by human preferences, I mean my own.
That is one objection to Coherent Extrapolated Volition (CEV), i.e. that human values are too diverse.
Though the space of possible futures an AGI could spit out is VERY large compared to the space of futures people would want, even if one takes the diversity of human values into consideration. Measured against that space, diverse human values still agree in ruling out almost everything.