I would argue, additionally, that the chief issue of AI alignment is not that AIs won't know what we want. Getting them to know what we want is easy; getting them to care is hard. A superintelligent AI will understand what humans want at least as well as humans do, possibly much better. It might just not care, not truly, not intrinsically.