[Question] How do you shut down an escaped model?

From a comment on a post about Autonomous Adaptation and Replication:
> ARA is just not a very compelling threat model in my mind. The key issue is that AIs that do ARA will need to be operating at the fringes of human society, constantly fighting off the mitigations that humans are using to try to detect them and shut them down.
And my question is: is it actually possible? Let's suppose the escaped model is running on AWS and you know about it. How are you going to shut it down? Can you call AWS admins and say, "Hey, there is a rogue AI running on your server," and get a reasonable response? Even if the AWS admins agree that it would be nice to shut down the rogue AI, do they have the legal right to do so? Can anything be done if the AI is cooperating with the owner of the AWS account? As far as I know, it is not technically illegal for a rogue AI to run on AWS.
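For concreteness: the purely technical side of a shutdown is trivial for whoever holds the account credentials. Here is a minimal sketch using boto3 (the region and instance IDs are hypothetical); the point is that this call requires the account owner's keys, which is exactly what outsiders lack if the owner is cooperating with the AI.

```python
# Minimal sketch: what "shutting down" looks like for whoever actually
# controls the AWS account. Region and instance IDs are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Instances suspected of hosting the model (hypothetical IDs).
suspect_ids = ["i-0123456789abcdef0"]

# A one-line call if you own the account; impossible through the API
# for anyone who does not. Third parties can only ask AWS to act.
ec2.terminate_instances(InstanceIds=suspect_ids)
```

So the interesting part of the question is not the API call, but who has the standing and the authority to make it (or to compel AWS to make it).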