Indeed, my sense is that most of the difficulty of the alignment problem lies in getting to the point where it's possible to give an AI formally specified preferences that refer to the real world. See: the ontology identification problem.