Unfortunately, I think all three of the listed points of view capture moral worth poorly, so evaluating unaligned AIs through them is mostly irrelevant.
They each capture some fragment of moral worth, and under ordinary circumstances they correlate moderately well with it, but that correlation falls apart outside the distribution of ordinary experience. An unaligned AGI expanding to fill the accessible universe is about as far out of distribution as it is possible to get.