Besides what erratio and Vladimir M have said, which I agree with:
Keeping the AI in a box has already been addressed by Eliezer as an inadequate solution, but your post showed no awareness of that. There is no point in posting to LessWrong on a subject that has already been covered in depth unless you have something to add.
LessWrong is about rationality, not AGI, and while there are connections between rationality and AGI, you didn't draw any.