I don’t think so. It would probably make AI alignment easier if we were able to define morality in a relatively simple way that allowed the AGI to derive the rest logically. But that still doesn’t counter the Orthogonality Thesis, since an AGI doesn’t necessarily have to have morality at all. We would still have to program it in; it would just (probably) be easier to do that than to find a robust definition of human values.