How about CEV?
Yes, that would be preferable. But only because I assert a correlation between the attributes that produce what we measure as g and personality traits and actual underlying preferences. A superintelligence extrapolating on ’s preferences would, in fact, produce a different outcome than one extrapolating on .
ArisKataris’s accusation that you don’t understand what CEV means misses the mark. You can understand CEV and still not conclude that CEV is necessarily a good thing.
And, uh, how do you define that?
Something like g, perhaps?
What would that accomplish? It’s the intelligence of the AI that will be getting used, not the intelligence of the people in question.
I’m getting the impression that some people don’t understand what CEV even means. It’s not about the programmers predicting a course of action, and it’s not about the AI acting on people’s current choices; it’s about the AI using the extrapolated volition, what people would choose if they were as smart and knowledgeable as the AI.
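To make that distinction concrete, here is a toy sketch (my own illustration, not an actual CEV algorithm; the `extrapolate` step is a placeholder for the entire hard problem, and all the names here are hypothetical):

```python
from collections import Counter

def most_common(prefs):
    # Crude aggregation: pick the single most frequent preference.
    return Counter(prefs).most_common(1)[0][0]

def act_on_current_choices(people):
    # Non-CEV AI: tallies what people say they want right now.
    return most_common(p["stated_preference"] for p in people)

def act_on_extrapolated_volition(people, extrapolate):
    # CEV-style AI: first idealizes each person (what they would want if they
    # were as smart and knowledgeable as the AI), then tallies those.
    return most_common(extrapolate(p)["stated_preference"] for p in people)

# The two calls can disagree once extrapolation changes anyone's mind:
people = [{"stated_preference": "A"}, {"stated_preference": "A"}]
flip = lambda p: {"stated_preference": "B"}
assert act_on_current_choices(people) == "A"
assert act_on_extrapolated_volition(people, flip) == "B"
```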