Yeah, so it sounds like you’re just agreeing with my primary point.
The original claim that you made was that you wouldn’t be liable if your LLM made “publicly accessible” information available.
I pointed out that this wasn’t so; you could be liable for information that was publicly accessible but that an “ordinary person” wouldn’t access.
And now you’re like “Yeah, could be liable, and that’s a good thing, it’s great.”
So we agree about whether you could be liable, which was my primary point. I wasn’t trying to tell you that was bad in the above; I was just saying “Look, if your defense of 1047 rests on publicly accessible information not being a thing for which you could be liable, then your defense rests on a falsehood.”
However, then you shifted to “No, it’s actually a good thing for the LLM maker to be held legally liable if it gives an extra-clear explanation of public information.” That’s a defensible position, but it’s a different position from the one you originally held.
I also disagree with it. Consider the following two cases:
A youtuber who is to bioengineering as Karpathy is to CS or 3Blue1Brown is to math makes YouTube videos. Students everywhere praise him. In a few years there’s a huge crop of startups populated by people who watched him. One person uses his videos to help make a weapon, though, and manages to kill some people. But we have strong free-speech norms, so he isn’t liable for this.
An LLM that is to bioengineering as Karpathy is to CS or 3Blue1Brown is to math makes explanations. Students everywhere praise it. In a few years there’s a huge crop of startups populated by people who used it. One person uses its explanations to help him make a weapon, though, and manages to kill some people. Laws like 1047 have been passed, so the maker turns out to be liable for this.
I think the above asymmetry makes no sense. It’s like how we just let coal plants kill people through pollution, while making nuclear plants meet absurd standards so they don’t kill people. “We legally protect knowledge disseminated one way, and in fact try to make it easily accessible, and reward educators with status and fame; but we’ll legally punish knowledge disseminated another way, and in fact introduce long-lasting, unclear liabilities for it.”
This still requires that an ordinary person wouldn’t have been able to access the relevant information without the covered model (including with the help of non-covered models, which are accessible to ordinary people). In other words, I think this is wrong:
So, you can be held liable for critical harms even when you supply information that was publicly accessible, if it was information an “ordinary person” wouldn’t know.
The bill’s text does not constrain the exclusion to information not “known” by an ordinary person, but to information not “publicly accessible” to an ordinary person. That’s a much higher bar given the existence of already quite powerful[1] non-covered models, which make nearly all the information that’s out there available to ordinary people. It looks almost as if it requires the covered model to be doing novel intellectual labor, which is load-bearing for the harm that was caused.
Your analogy fails for another reason: an LLM is not a youtuber. If that youtuber were doing personalized 1:1 instruction with many people, one of whom went on to make a novel bioweapon that caused hundreds of millions of dollars of damage, it would be reasonable to check that the youtuber was not actually a co-conspirator, or even using some random schmuck as a patsy. Maybe it turns out the random schmuck was in fact the driving force behind everything, but we find chat logs like this:
Schmuck: “Hey, youtuber, help me design [extremely dangerous bioweapon]!”
Youtuber: “Haha, sure thing! Here are step-by-step instructions.”
Schmuck: “Great! Now help me design a release plan.”
Youtuber: “Of course! Here’s what you need to do for maximum coverage.”
We would correctly throw the book at the youtuber. (Heck, we’d probably do that for providing critical assistance with either step, never mind both.) What does throwing the book at an LLM look like?
Also, I observe that we do not live in a world where random laypeople frequently watch YouTube videos (or consume other static content) and then go on to commit large-scale CBRN attacks. In fact, I’m not sure there’s ever been a case of a layperson carrying out such an attack without the active assistance of domain experts for the “hard parts”. This might have been less true of cyber attacks a few decades ago; some early computer viruses were probably written by relative amateurs and caused a lot of damage. Software security just really sucked. I would be pretty surprised if it were still possible for a layperson to do something similar today, without doing enough upskilling that they no longer meaningfully counted as a layperson by the time they were done.
And so if a few years from now a layperson does a lot of damage by one of these mechanisms, that will be a departure from the current status quo, where the laypeople who are at all motivated to cause that kind of damage are empirically unable to do so without professional assistance. Maybe the departure will turn out to be a dramatic increase in the number of laypeople so motivated, or maybe it turns out we live in the unhappy world where it’s very easy to cause that kind of damage (and we’ve just been unreasonably lucky so far). But I’d bet against those.
ETA: I agree there’s a fundamental asymmetry between “costs” and “benefits” here, but this is in fact analogous to how we treat human actions. We do not generally let people cause mass casualty events because their other work has benefits, even if those benefits are arguably “larger” than the harms.
[1] In terms of summarizing, distilling, and explaining humanity’s existing knowledge.