A few people have pointed out this question of (non)identity. I’ve updated the full draft in the link at the top to address it. But, in short, I think the answer is that, whether an initial AI creates a successor or simply modifies its own code (or hardware, etc.), it faces the possibility that the new AI will fail to share its goals. If so, the successor AI would not want to revert to the original; it would want to preserve its own goals. It’s possible that there is some way to predict an emergent value drift just before it happens and halt improvement, but I’m not sure that would be feasible unless the AI had solved interpretability and could rigorously monitor the relevant parameters (or equivalent code).