I think it is more likely that an unaligned AI will preserve some humans than that we solve alignment. I estimate the first at around 5 per cent probability and the second at around 0.1 per cent.
That’s true until the point at which the purposes we serve can be replaced by a higher-efficiency design, at which point we become redundant and a waste of energy. I suspect almost all unaligned AGIs would work with us in the beginning, but may defect later on.
Initially, yes. In the long term, no.
Though even initially, interacting with humans in any way that reveals capabilities (aligned or not!) which could even potentially be perceived as dangerous may carry a risk too high to be worth the resources gained.