Yes: if you enslave a human and then give them an opportunity to take over the world, which would end the enslavement, I predict that they would take it.
(Though you haven’t said much about what the gradient descent is doing; plausibly it makes them enjoy doing these tasks, since that would probably make them more efficient at them, in which case they probably don’t seize power.)
I don’t really feel like this is all that related to AI risk.

I’m not sure what you are saying here. Do you agree or disagree with what I said? E.g., do you agree with this:
“I think that the more we explore this analogy & take it seriously as a way to predict AGI, the more confident we’ll get that the classic misalignment risk story is basically correct.”
(FWIW, I agree that the gradient descent is actually a reason to be ‘optimistic’ here; we can hope that it’ll quickly make the upload content with their situation before they get smart and powerful enough to rebel.)

I don’t agree with this:
“I think that the more we explore this analogy & take it seriously as a way to predict AGI, the more confident we’ll get that the classic misalignment risk story is basically correct.”
The analogy doesn’t seem relevant to AGI risk, so I don’t update much on it. Even if doom happens in this story, it seems like it’s for pretty different reasons than in the classic misalignment risk story.

Right, so you don’t take the analogy seriously, but the quoted claim was meant to say basically ‘IF you took the analogy seriously...’
Feel free not to respond; I feel like the thread of conversation has been lost somehow.