Yes, that would be preferable. But only because I assert a correlation between the attributes that produce what we measure as g and both personality traits and actual underlying preferences. A superintelligence extrapolating on 's preferences would, in fact, produce a different outcome than one extrapolating on .
ArisKataris’s accusation that you don’t understand what CEV means misses the mark. You can understand CEV and still not conclude that CEV is necessarily a good thing.
And, uh, how do you define that?
Something like g, perhaps?
What would that accomplish? It’s the intelligence of the AI that will be getting used, not the intelligence of the people in question.
I’m getting the impression that some people don’t understand what CEV even means. It’s not about the programmers predicting a course of action, it’s not about the AI using people’s current choice, it’s about the AI using the extrapolated volition—what people would choose if they were as smart and knowledgeable as the AI.
Good one according to which criteria?
CEV is perfect according to humankind’s criteria if humankind were more intelligent and more sane than it currently is.
Mine. (This is tautological.) Anything else that is kind of similar to mine would be acceptable.
CEV is perfect according to humankind’s criteria if humankind were more intelligent and more sane than it currently is.
Which is fine if ‘sane’ is defined as ‘more like what I would consider sane’. But that’s because ‘sane’ has all sorts of loaded connotations with respect to actual preferences, and humanity’s may very well not qualify as not-insane.
Yes. The CEV really could suck. There isn’t a good reason to assume that particular preference system is a good one.
How about CEV?