I agree that an AI (or any other intelligence) cannot predict its own choices: predicting your own choice just is making it, so you cannot know what you are going to do before you know what you are going to do.
But the kind of “understanding itself” needed for self-improvement seems to be of a different type: the AI needs to understand the algorithms that produce its decisions, but it does not need to simulate them in real time.
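To make the distinction concrete, here is a minimal toy sketch (the functions and names are hypothetical, purely illustrative). An agent that knows its decision procedure is an argmax over a utility function can rewrite that procedure to be more efficient, even though learning which option it will pick still requires actually running the procedure, i.e. choosing:

```python
def slow_decide(options, utility):
    # Original decision procedure: linear scan for the max-utility option.
    best = options[0]
    for option in options[1:]:
        if utility(option) > utility(best):
            best = option
    return best

# "Understanding itself" here means knowing the algorithm is an argmax
# over `utility` -- knowledge sufficient to rewrite it more cleanly,
# without knowing in advance what any particular call will return.
def improved_decide(options, utility):
    return max(options, key=utility)

# The rewrite preserves behavior; to find out *which* option either
# version picks, the agent still has to run it (i.e., actually choose).
options = [3, -1, 4, -1, 5]
assert improved_decide(options, abs) == slow_decide(options, abs)
```

The point of the sketch: self-improvement operated on the algorithm's structure, not on a prediction of its output.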