I think it is possible, if we have a near-miss with alignment, where the AI values us, but not above its own preservation. Or maybe we could be useful as a test bed for how "primitive races" are likely to try to kill a SuperAI.
But 1. this scenario is not what I would bet on, and 2. it would still mean the loss of our control over our own future.