I honestly don’t think shutting it down on AWS would be the hard part, if it’s clearly identifiable. To sum it up:
if it’s doing anything illegal (like hacking or engaging in insider trading) for a quick buck, it can obviously be taken down;
if it’s doing anything that can be reasonably construed as a threat to US national security, then it better be taken down, or else.
That leaves us with a rogue ARA that nevertheless stays entirely on the straight and narrow, playing the good kid and acting essentially like a perfectly honest company, making money legally, in which case it would be entirely defensible for Amazon not to shut it down despite the complaints. And even then, it’s not like Amazon couldn’t shut it down at a whim if they had reason to. If they thought hosting it was bad publicity (and hosting a totally-not-suspicious autonomous AI that might or might not be scheming to take over the world seems like terrible publicity), they can shut it down. If it sours their relationships with other companies (like the social media platforms the AI is probably flooding with ToS-violating botnets right now), they can shut it down. See for example how app stores and many websites are purging essentially everything remotely lewd because payment processors don’t want to be seen supporting that stuff, and every business is downstream of payment processors. You don’t have to convince Amazon that AI is dangerous; you have to convince VISA and Mastercard, and the rest will follow suit.
If everything else fails, and if the US government doesn’t yet feel threatened enough to go “screw it” and roll in the SWAT teams anyway, there’s always the option of legal loopholes. For example, if the AI was trained on copyrighted material (which it almost certainly was), you can probably invoke anti-piracy laws. I would need a legal expert to pitch in, but I can imagine you might not even need to win such a lawsuit: you might manage to get the servers seized just by filing it at all.
IMO, dangerous ARAs would need to be sneaky to some degree, keeping backups on consumer hardware and/or relying on collaborators. Agents operating alone off AWS or similar services would have a clear single point of failure.