That’s true only until the purposes we serve can be replaced by a higher-efficiency design, at which point we become redundant and a waste of energy. I suspect almost all unaligned AGIs would work with us in the beginning but might defect later on.
Though even initially, the risk of interacting with humans in any way that reveals capabilities (aligned or not!) that could be perceived as dangerous may be too high to be worth the resources gained.
> That’s true only until the purposes we serve can be replaced by a higher-efficiency design, at which point we become redundant and a waste of energy. I suspect almost all unaligned AGIs would work with us in the beginning but might defect later on.
Initially, yes. In the long term, no.
> Though even initially, the risk of interacting with humans in any way that reveals capabilities (aligned or not!) that could be perceived as dangerous may be too high to be worth the resources gained.