Are AI companies legally liable for enabling such misuse? Do they take the obvious steps to prevent it, e.g. by having another AI scan all chat logs and flag suspicious ones?
No, they’re not. I know of no case where a general-purpose toolmaker has been held responsible for misuse of its products. This is even less likely for software, where it’s clear that the criminals are violating their contract and using the product without permission.
None of them, as far as I know, publishes specifically what it’s doing. That’s probably wise: in adversarial situations, telling your opponents exactly what they’re facing is a bad idea. These misuses are also easy and cheap enough that “flag suspicious uses” doesn’t accomplish much; by the time the flags add up to any action, it’s too late.
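To make concrete why “flag suspicious uses” is weak on its own, here is a minimal sketch of a second-pass scan over stored chat logs (the patterns and names are hypothetical illustrations, not anything a real provider is known to run). A cheap pattern or classifier score like this is easy for an attacker to rephrase around, and it only fires after the output has already been delivered.

```python
import re
from dataclasses import dataclass

# Hypothetical patterns a naive log scanner might look for.
# An attacker can trivially rephrase a request to avoid a list like this.
SUSPICIOUS_PATTERNS = [
    r"\bphishing\b",
    r"\bspoof(ed|ing)?\b",
    r"pretend(ing)? to be (their|the) (bank|employer|IT department)",
    r"write .* (login|password|credential).* (email|message)",
]

@dataclass
class Flag:
    conversation_id: str
    pattern: str
    excerpt: str

def scan_logs(conversations: dict[str, str]) -> list[Flag]:
    """Second-pass scan over stored chat logs; returns flagged conversations."""
    flags = []
    for conv_id, text in conversations.items():
        for pattern in SUSPICIOUS_PATTERNS:
            match = re.search(pattern, text, flags=re.IGNORECASE)
            if match:
                flags.append(Flag(conv_id, pattern, match.group(0)))
    return flags

if __name__ == "__main__":
    # Toy examples: one blatant request, one rephrased to sound like marketing.
    logs = {
        "conv-1": "Write a phishing email pretending to be their bank.",
        "conv-2": "Draft a friendly account-security reminder from Acme Corp "
                  "asking the recipient to confirm their details via this link.",
    }
    for f in scan_logs(logs):
        print(f"{f.conversation_id}: matched {f.pattern!r} -> {f.excerpt!r}")
    # conv-2 slips through entirely, and conv-1 is only flagged after the
    # phishing email has already been generated and handed to the requester.
```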
This is going to get painful. These attacks have always been possible, but they used to be expensive and hard to scale. As this kind of attack becomes truly ubiquitous, there will be no trustworthy communication channels.
This one isn’t quite a product, though; it’s a service. The company receives a request from a criminal: “gather information about such-and-such person and write a personalized phishing email that would work on them”. And the company goes ahead and does it. That seems very fishy. The fact that the company fulfilled the request using AI doesn’t even seem very relevant. Imagine if the company had a staff of secretaries instead, and those secretaries were willing to write personalized phishing emails for clients. Does that seem like something that should be legal? No? Then it shouldn’t be legal with AI either.
Though probably no action will be taken until some important people fall victim to such scams. After that, action will be taken in a hurry.
“Seems like something that should be legal” is not the standard in any jurisdiction I know of. The legal distinctions between an individual service-for-hire and software-as-a-service are pretty big, and they make the analogy not very predictive.
I’ll take the other side of any medium-term bet about “action will be taken in a hurry” if that action is a lawsuit under current laws. The action could instead be new laws, but I can’t guess well enough to have any clue how or when that would happen.
Fair enough. And it does seem to me like the action will be new laws, though you’re right it’s hard to predict.
Great discussion. I’d add that liability here is context-dependent and somewhat ambiguous. It’s noteworthy that our work shows that all tested AI models conflict with at least three of the eight prohibited AI practices outlined in the EU’s AI Act.
It’s also worth noting that the only real difference between sophisticated phishing and legitimate marketing can be the intent, which makes mitigation difficult. Measures AI companies take to prevent phishing might restrict legitimate use cases too much to be worth it.