Is there any remotely feasible way for us to contain a superintelligence aside from us also becoming superintelligences?
That’s what MIRI are trying to work out, right?
Prevent it from becoming a super-intelligence in the first place. You can’t guarantee boxing a fully self-improving AI, but if you run an AI on a Macintosh II you could probably keep it contained.
That doesn’t answer the question.
It does; the answer given is “no”.
You could put it in a box with no gatekeeper or other way to interact with the outside world. It would be completely pointless and probably unethical, but you could do it.
Isn’t this the uncomputable utilon question?