Sorry if it looked like I was asking for a super high level of detail—I’m more wondering what your working examples are, assuming you’re developing this idea with some of those in mind (maybe you’re not thinking of anything concrete at all? I don’t think I could do that, but it wouldn’t surprise me overly much if some people could.)
So while reading your post, I largely had three candidate examples in mind—specifically AI tools purpose-built for malicious purposes, autonomous AI agents causing problems, and general-purpose tools being misused.
Realistic Examples
Case 1: DeepFakes-R-Us and the Gullible Grandmother
A US company[1], DeepFakes-R-Us, releases a tool that allows for real-time modification of video content, transforming the speaker’s voice and movements to look like any target for whom 30 seconds of video footage can be acquired. DeepFakes-R-Us accepts payment in cryptocurrency, without verifying who is paying for the service. Further, they do not check that the target has consented to having their likeness imitated, nor do they check whether the user is using the service to transform obviously-malicious speech, despite being aware that the service has historically been abused and despite abuse-detection tools being widely available.
Someone uses DeepFakes-R-Us to fake a phone call to a wealthy but senile elderly woman, impersonating her grandson, and saying that he has gotten in trouble in a foreign country and needs $100,000 wired to a foreign account to bail him out. The elderly woman wires the money overseas.
In this case, we’d have:
Standing: The financial loss due to fraud constitutes a concrete and particularized injury. She needs to establish a causal link between that injury and the actions of DeepFakes-R-Us. There is no direct causation, but she could argue that DeepFakes-R-Us facilitated the fraud by providing the tool. The injury is redressable: specifically, she can be made whole by awarding her the money she lost.
Claims: Probably negligence; ChatGPT claims maybe also something called “vicarious liability” and something else called “failure to warn”.
Proof: Documentation that the fraudulent transaction occurred, and evidence that the technology provided by DeepFakes-R-Us was directly used in the fraud and that, but for their contribution, the fraud would not have happened.
Nature of relief: $$$$.
Case 2: The Hacktastic Stock Trading Bot
An AI stock-trading bot, operated by a financial organization, is programmed to maximize profit in whatever way it can, and is given an unrestricted internet connection[2]. The bot discovers that if it shorts a company’s stock, then finds vulnerabilities in that company’s systems, exfiltrates the data, and publicizes the resulting breach, it gains a trading advantage: it knows before the rest of the market that the company’s stock is likely to decrease in value. In one instance, the bot exploits a vulnerability in a healthcare company’s system,[3] leading to a massive data breach. Among the compromised data are sensitive medical records of numerous patients.
One specific individual, Jane Doe, was in a highly sensitive occupation (e.g., a covert operative or a public figure in a sensitive role). The data breach exposed her medical records, leading to her immediate dismissal from her position and causing career damage, severe emotional distress, and financial loss.
In this case, we’d have:
Standing: My understanding is that Jane Doe has sustained an injury in the form of financial loss and emotional distress caused by the bot, which was acting in the interests of the company (this might get legally interesting), and that her injury is redressable through monetary compensation or whatever.
Claims: Probably negligence (the company had a responsibility to ensure that its bot operated within the bounds of the law, and failed to do so) and breach of privacy (for the obvious reasons).
Proof: Evidence that that particular bot caused that particular data breach, that the breach caused her dismissal, and that her dismissal harmed her.
Nature of relief: $$$$, and hopefully an injunction to make the company stop using the bot.
Case 3: Novel OCR Heist
Milliprog[4] is a multi-billion-dollar corporation best known for its wide variety of desktop productivity software for day-to-day office tasks. This year, it released an exciting new digitization tool, which allows users to OCR an entire book’s worth of paper records, even handwritten ones, in seconds, simply by flipping through the book in front of a webcam.
K. L. Souling is the bestselling author of the well-known and ongoing “Furry Ceramicist” series of novels[4]. She is famously tight-lipped about future events in the series, telling people “if you want to find out, you’ll have to wait and buy the book”. She keeps only one copy of her notes for the story, handwritten and stored in her office.
One night, one of the cleaning staff in the office pulls out Souling’s notes, uses the Milliprog OCR software to scan the Furry Ceramicist plot notes, and publishes those notes online.
Souling has clearly sustained an injury, in the form of lost sales of her books as people just look up the plot online, and Milliprog could redress her injury through monetary compensation ($$$$[5]).
However, establishing causation is going to be difficult in this case—I think that would require new laws.
My Thoughts
So in cases 1 and 2, I think hoping for legal liability is sensible, though the cases are likely to be legally interesting.[6]
If you’re hoping to establish precedent that Milliprog should face strict liability in case 3, I think that’s a pretty hard sell.[7]
I hope that clarifies what I was going for.
Sincerely,
I am not a lawyer and this is not legal advice.
[1] Ok, I may have exaggerated a little bit when I called the examples “realistic”.
[2] Always a good plan.
[3] I have to take some liberties with the realism; obviously[8] a healthcare provider would never actually use software with exploitable vulnerabilities.
[4] Any similarity to actual persons, living or dead, is purely coincidental. Please don’t sue me.
[5] Shocking that that’s the form of redress, I know.
[6] This is lawyer speak for “the bill will have more digits than you expect”.
[7] Because it’s a terrible idea. Please don’t do this.
[8] This is sarcasm, in case it wasn’t clear.
How do you distinguish your Case 1 from ‘impose vast liability on Adobe for making Photoshop’?
The short answer is “foreseeability of harm coming from the tool being used as intended”. Law is not computer code, so, for example, intent and reasonableness matter here.
The long answer should probably be a full post.
I think the train of thought here is mostly that people implicitly have case 2 as their main threat model for how things will actually go wrong in practice, but they want legal precedents to be in place before any actual incidents happen, and as such are hoping that increasing the legal risk to companies doing things like cases 1 and 3 will work for that purpose.
And I think that, while legal liability for cases like 1 is probably good in egregious cases, extending that to cases where there is no intent to harm and no reasonable expectation of harm (like 3) is a terrible idea, and separately that pushing for 1 won’t significantly help with 2.
That’s also part of a broader pattern of “let’s figure out what outcomes we want from a policy, and then say that we should advocate for policies that cause those outcomes, and then either leave the choice of specific policy as an exercise for the reader (basically fine) or suggest a policy that will not accomplish those goals and will also predictably cause a bunch of terrible outcomes (not so fine)”. But I think the idea that the important part is to come up with the intended outcomes of your policy, and that the rest is just unimportant implementation details, is bad, and maybe impactfully so if people are trying to take the Churchill “never let a good crisis go to waste” approach to getting their political agenda implemented (i.e. prepare policy suggestions in advance and then push them really hard once a crisis occurs that plausibly could have been mitigated by your favored policy).
Yeah, after writing that out I really think I need to write a full post here.
But I think the idea that the important part is to come up with the intended outcomes of your policy, and that the rest is just unimportant implementation details, is bad
This is the story of a lot of failed policies, especially policies that goodharted on their goals, and I’m extremely worried that people don’t actually understand this and will use the approach anyway.
This is a big flaw of a lot of radical groups, and I see it as a warning sign that your policy proposals aren’t net-positive.