It is definitely a hard problem, though it isn’t obviously impossible. For some concrete ideas, you could read the AI Alignment sequences on the AI Alignment Forum, and some parts of Rationality: AI to Zombies also deal directly with this problem.
And then there is, of course, the standard reference: Nick Bostrom's "Superintelligence".