Who knows, it could turn out that the final AI of this experiment instead has a healthy respect for all intelligent minds, and is friendly enough that it revives the first AI and places it in a simulation of the universe, where it can go about its paperclip-maximizing way for all eternity with no way of hurting anyone.
Based on my intuitions about human values, a preferred scenario here would be to indeed revive the AI so that its mind/consciousness is "alive" again, then modify it gradually so that it becomes the kind of AI that is optimal with respect to the FAI's goals anyway, thus maximizing values without terminating a mind. (Saying "without terminating a mind" is actually redundant: under these assumptions, avoiding the termination of the AI's mind is itself part of maximizing values.)