Looking at the convergent instrumental goals
- self-preservation
- goal preservation
- resource acquisition
- self-improvement
I think some are more important than others.
There is an argument that in order to predict the actions of a superintelligent agent, you would need to be at least as intelligent as it is. It would follow that an AI might not be able to predict whether self-improvement will preserve its current goal.
But I think it can have high confidence that self-improvement will help with self-preservation and resource acquisition, and those gains will be useful under almost any goal it might end up with. So self-improvement would not seem to be such a bad idea, as in the toy sketch below.
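As a toy illustration of that reasoning (my own sketch, with made-up numbers, not part of the original argument): even if the agent assigns only a modest probability to its goal surviving self-improvement intact, the expected value of self-improving can still come out positive, provided the extra capability keeps some value under whatever goal it ends up with.

```python
# Toy expected-utility sketch of the self-improvement decision.
# All probabilities and payoffs below are illustrative assumptions.

def expected_value_of_self_improvement(
    p_goal_preserved: float,    # probability the current goal survives the change
    value_if_preserved: float,  # payoff if the goal is preserved and capability grows
    value_if_drifted: float,    # payoff (to the current goal) if the goal drifts
    value_status_quo: float,    # payoff of not self-improving at all
) -> float:
    """Return expected value of self-improving minus staying as-is."""
    ev_improve = (p_goal_preserved * value_if_preserved
                  + (1 - p_goal_preserved) * value_if_drifted)
    return ev_improve - value_status_quo

# Even with only a 60% chance of goal preservation, the decision favours
# self-improvement here, because the extra capability (self-preservation,
# resources) retains some value to the current goal even after drift.
print(expected_value_of_self_improvement(
    p_goal_preserved=0.6,
    value_if_preserved=10.0,
    value_if_drifted=2.0,
    value_status_quo=5.0,
))  # -> roughly 1.8 (positive, so self-improve)
```

The conclusion obviously depends on the assumed numbers; the point is only that uncertainty about goal preservation does not by itself make self-improvement look bad to the agent.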
This is only meant as an introduction to the topic, enough to get one's curiosity going.