For #1, all we can do is give the AI a better approximation of inductive inference. For #2, we can state its values in more ontology-independent terms.
These are both incredibly difficult to do when you don’t know (and probably can’t imagine) what kind of ontological crises the AI will face.
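As a toy illustration of what "ontology-independent" might mean (hypothetical code, not from the original; the `Atom`/`FieldRegion` types and the `comfortable` predicate are invented for the example): a value function that refers directly to objects in one world-model is undefined once that model is replaced, whereas one stated over an abstract predicate only needs the predicate re-grounded in the new ontology.

```python
from dataclasses import dataclass

# --- Old ontology: the world is made of discrete "atoms" with a temperature.
@dataclass
class Atom:
    temperature: float

def utility_ontology_bound(world: list[Atom]) -> float:
    # Brittle: refers to Atom.temperature directly. If the agent's new
    # world-model contains no Atom type, this function is simply undefined.
    return -sum(abs(a.temperature - 300.0) for a in world)

# --- Ontology-independent restatement: utility depends only on an abstract
#     predicate "comfortable(region)", which each ontology must ground.
def utility_abstract(regions, comfortable) -> float:
    # The value statement itself never mentions atoms, fields, or any
    # other ontology-specific object; only the grounding does.
    return sum(1.0 for r in regions if comfortable(r))

# Grounding under the old ontology:
old_world = [Atom(295.0), Atom(310.0), Atom(301.0)]
print(utility_abstract(old_world, lambda a: abs(a.temperature - 300.0) < 5.0))

# After an ontological crisis, the new model describes continuous fields:
@dataclass
class FieldRegion:
    mean_energy: float

new_world = [FieldRegion(1.1), FieldRegion(0.4)]
# Only the grounding of `comfortable` is rewritten, not the value statement:
print(utility_abstract(new_world, lambda f: f.mean_energy < 1.0))
```

Of course, the sketch hides the hard part: someone still has to write the new grounding, which is exactly what we cannot do in advance for crises we cannot imagine.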