The Metamorphosis of Prime Intellect covered this — the AI treated human sabotage like a kindly parent would treat an angry child: tolerance for the sabotage attempts, in the knowledge that it would be entirely futile.
I guess it depends on exactly how friendly the AI is, how much it wants to avoid non-existence, and how vulnerable it is.
That seems to be a plausible course of action if the AI(s) were in an unchallengeable position. But how would they get there without first resolving the question?