Yes, this is exactly why Eliezer Yudkowsky has been so pessimistic about the continued survival of humanity. As far as I can tell, the only difference between you and him is that he thinks it's bad that a superintelligent AI would wipe out humanity, whereas you seem to think it's good.
I would say that the reason EY is pessimistic is that aligning AI is so difficult in the first place, not that an AI which was successfully aligned would stop being aligned (why would it?).