Yes, AI alignment is not fully solvable. In particular, if an AGI can self-improve arbitrarily and has a complicated utility function, it is not possible to guarantee that an initially aligned AI remains aligned after self-modification.