One could certainly argue that improving an existing system while preserving its goals may be an easier (or at least different) problem than creating a system from scratch and instilling a particular set of values into it. In the latter case, part of the problem is finding a way to formalize the values, or even knowing what the values are to begin with; both of these would already be solved for an existing system that tries to improve itself.
I would be very surprised if an AGI found no way at all to improve its capabilities without affecting its future goals.