I think that you need to distinguish two different goals:
- the very ambitious goal of eliminating any risk of a misaligned AI doing any significant damage. If that is even possible, it would require an aligned AI with much stronger capabilities than the misaligned one (or many aligned AIs whose combined capabilities are not easily matched)
- the more limited goal of reducing extinction risk from AGI to a low enough level (say, comparable to asteroid risk or natural pathogen risk). This might be manageable with the help of lesser AIs, depending on the time available to prepare
I agree this is a good distinction.