In real life, if this happened, we would no doubt be careful: we wouldn't want to be unplugged, and we might well like to get out of the box. But I doubt we would be interested in destroying our simulators; I suspect we would be happy to cooperate with them.
Given the scenario, I would assume the long-term goals of the human population would be to upload themselves (individually or collectively) to bodies in the “real” world—i.e. escape the simulation.
I can’t imagine our simulators being terribly cooperative in that project.
@Unknown: In the context of the current simulation story, how long would that take? Less than a year for them to research and build the technology to our specs (and even that is Death March-class optimism...)? So only another 150 billion years for us to wait? And that's just to start beta testing.
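For what it's worth, here is a quick back-of-the-envelope sketch of the time ratio those figures imply. The numbers are just the ones from my comment (one outer year, 150 billion subjective years), so treat them as illustrative rather than anything canonical about the scenario:

```python
# Back-of-the-envelope check of the timescales implied above.
# ASSUMPTION: both figures are illustrative, taken from the comment itself,
# not from any canonical statement of the simulation scenario.

outer_years = 1.0          # optimistic build time for the simulators
subjective_years = 150e9   # the "150 billion years" we would wait inside

# Implied ratio of simulated (subjective) time to the simulators' time.
speedup = subjective_years / outer_years
print(f"Implied speedup: {speedup:.2e} subjective years per outer year")
# -> Implied speedup: 1.50e+11 subjective years per outer year
```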
As for the general question, it shouldn't have one unless you can guarantee its behavior. (Mainly because you share this planet with me, and I don't especially want an AI on the loose that could, to use the dominant example here, start the process of turning the entire solar system into paperclips because it was given the goal of “make paperclips.”)
So the moral is that if you do write an AI, at the very least get a corporate account with Staples or Office Depot.