Bill:
Currently we do not know how to build intelligent machines. When we do, we can apply those machines to learning human values. If a machine is sufficiently intelligent to pose an existential threat to humanity, then it is sufficiently intelligent to learn human values.
Luke:
I generally agree that superintelligent machines capable of destroying humanity will be capable of learning human values and maximizing for them, if “learn human values and maximize for them” is a coherent request at all. But capability does not imply motivation.
It seems like you were talking past each other here, and this never got fully resolved. Bill is entirely correct that a sufficiently intelligent machine would be able to learn human values; indeed, a UFAI might be motivated to do so in order to manipulate humans. Making the AGI motivated to maximize human values is the hard part.