In theory, possibly, but it’s not clear how to save the world given such restricted access. See e.g. https://www.lesswrong.com/posts/NojipcrFFMzNx6Grc/sudo-s-shortform?commentId=onKfTrunn2Q2Gc4Pw
In practice no, because you can’t deal with a superintelligence safely. E.g.
You can’t build a computer system that’s robust to auto-exfiltration. I mean, maybe you can, but you’re taking on a whole bunch more cost, and also hoping you didn’t screw up.
You can’t develop this tech without other people stealing it and running it unsafely.
You can’t develop this tech safely at all, because in order to develop it you have to do a lot more than just get a few outputs; you have to, for example, debug your code, which means interacting with the system far more than a restricted-access setup would allow.
And so forth. Mainly the “and so forth”: these examples are far from exhaustive.