I take it that locating the CEV of mankind is essentially a knowledge-extraction problem—you go about it by interviewing a broad sample of mankind to get their opinions, you then synthesize one or more theories explaining their viewpoint(s), and finally you try to convince them to sign off on your interpretation of what they believe.
Or—if you are Ben—you could just non-invasively scan their brains!
Thanks for the link. It appears that Goertzel’s CAV is closer to what I was talking about than Yudkowsky’s CEV. But I strongly doubt that scanning brains will be the best way to acquire knowledge of mankind’s collective volition, at least if we want the CEV extracted before the Singularity.
My comment on that topic at the time was:
Non-invasively scanning everyone’s brains to figure out what they want is all very well—but what if we get intelligent machines long before such scans become possible?
The other variation from around then was Roko’s document:
Bootstrapping Safe AGI Goal Systems—CEV and variants thereof.
I still have considerable difficulty in taking this kind of material seriously.