How do you identify what knowing better would mean, when you don’t know better yet?
The same way we do, but faster? Like, if you start out thinking that scandalous-and-gross-sex-practice is bad, you can consider arguments like “disgust is easily culturally trained so it’s a poor measure of morality”, and talk to people so you form an idea of what it’s like to want and do it as a subjective experience (what positive emotions are involved, for example), and do research so you can answer queries like “If we had a brain scanner that could detect brainwashing manipulation, what would it say about people who want that?”.
So the superintelligence builds a model of you and feeds it lots of arguments, memory tape from others, and other kinds of information. And then we run into trouble, because maybe you end up wanting different things depending on the order it feeds them to you, or it tells you too many facts about Deep Ones and it breaks your brain.