Regarding the two ways you enumerate in which AI alignment could serve to further existential safety, I think a third, more viable, way is missing:
AI alignment solutions allow humans to build powerful AI systems that behave as planned without compromising existential safety.
I presume that it is desirable to build powerful AI systems—either to do object-level useful things, or to help humanity regulate other AI systems. There is a family of arguments, which I associate with Bostrom and Yudkowsky, that it is difficult to build powerful AI systems that are aligned with what their creators want them to do, either for ‘outer alignment’ reasons (difficulty in objective specification) or for ‘inner alignment’ reasons (inherent difficulties in optimization). This family of arguments also advances the idea that such alignment failures can have consequences that compromise existential safety. If you accept these arguments, then it appears to me that AI alignment solutions are necessary, but not sufficient, for existential safety.