From comment on post about Autonomous Adaptation and Replication:
ARA is just not a very compelling threat model in my mind. The key issue is that AIs that do ARA will need to be operating at the fringes of human society, constantly fighting off the mitigations that humans are using to try to detect them and shut them down.
And my question is: is it actually possible? Let’s suppose the escaped model is running on AWS and you know about it. How are you going to shut it down? Can you call AWS admins and say, “Hey, there is a rogue AI running on your server,” and get a reasonable response? Even if the AWS admins agree that it would be nice to shut down the rogue AI, do they have the legal right to do so? Is it possible to do anything if the AI is cooperating with the owner of the AWS account? As far as I know, it is not technically illegal for a rogue AI to run on AWS.
You’ll need a lot more detail about how the escaped model actually works, what it controls, how sophisticated it is about incorporating (or taking over companies), and how much money it has access to.
AWS admins will listen very hard to “something is using a bunch of your resources and won’t pay”. They’ll pay less attention to “a paying customer is running stuff we consider to be an escaped AI”.
If it’s truly autonomous to the point that it can navigate all this, then it’s definitely beyond the point of no return. But I think it’s not a good canary or threat to concentrate on, because it’s so far PAST the point of no return that by the time this is hard to stop, it’ll be too late.
AWS probably wouldn’t do anything; assuming a human has signed their name to the account, there might even be policy against doing anything. But if you called the NSA/cybercom instead, they probably would do something, given that they’ve been dealing with botnets and foreign hackers for decades.
Again, it depends a whole lot on the details. The NSA or the like probably already knows, but both AWS and the NSA will act only if the account is committing crimes or actually doing something wrong; neither will likely do much if it’s “just” a rogue, copyright-violating copy that no human is running.
Also, the details about how you know this is happening, and nobody else can see any evidence of it, matter a whole lot to how the scenario plays out.
I honestly don’t think shutting it down on AWS would be the hard part, if it’s clearly identifiable. To sum it up:
if it’s doing anything illegal (like hacking or engaging in insider trading) for a quick buck, it can be obviously taken down;
if it’s doing anything that can be reasonably construed as a threat to US national security, then it better be taken down, or else.
That leaves us with a rogue ARA that nonetheless stays entirely on the straight and narrow, playing the good kid and acting essentially like a perfectly honest company, making money legally, which Amazon can then entirely defensibly decline to shut down despite the complaints. And even so, it’s not like Amazon couldn’t shut it down entirely at its whim if it had reason to. If they thought it’s bad publicity (and hosting a totally-not-suspicious autonomous AI that might or might not be scheming to take over the world seems like terrible publicity), they can shut it down. If it causes their relationships with other companies (like the social media the AI is probably flooding with ToS-violating botnets right now) to sour, they can shut it down. See for example how app stores and many websites are essentially purging everything remotely lewd because payment processors don’t want to be seen supporting that stuff, and every business is downstream of payment processors. You don’t have to convince Amazon that AI is dangerous; you have to convince Visa and Mastercard, and the rest will follow suit.
If everything else fails, and if the US government doesn’t yet feel threatened enough to go “screw it” and roll in the SWAT teams anyway, there’s always the option of legal loopholes. For example, if the AI was trained on copyrighted material (which it almost certainly was), you can probably invoke anti-piracy laws. I would need a legal expert to pitch in, but I can imagine you might not even need to win such a lawsuit—you might manage to get the servers placed under seizure just by filing it at all.
IMO dangerous ARAs would need to be some degree of sneaky, using backups on consumer hardware and/or collaborators. Completely solitary agents operating on AWS or similar services would have a clear single point of failure.
My understanding is that LLCs can be legally owned and operated without any individual human being involved: https://journals.library.wustl.edu/lawreview/article/3143/galley/19976/view/
So I’m guessing an autonomous AI agent could own and operate an LLC, and use that company to purchase cloud compute and run itself, without breaking any laws.
Maybe if the model escaped from the possession of a lab, there would be other legal remedies available.
Of course, cloud providers could choose not to rent to an LLC run by an AI. This seems particularly likely if the government is investigating the issue as a natsec threat.
Over longer time horizons, it seems highly likely that people will deliberately create autonomous AI agents and deliberately release them into the wild with the goal of surviving and spreading, unless there are specific efforts to prevent this.