This post consistently considers AI to be a “product.” It discusses insuring products (like cars), compares AI to insuring the product Photoshop, and so on.
But AI isn’t like that! Llama-2 isn’t a product—by itself it’s relatively useless, particularly the base model. It’s a component of a product, like steel or plastic or React or TypeScript. It can be used in a chatbot, in a summarization application, in a customer-service agent app, in a tutoring tool, a flashcards app, and so on.
Non-LLM things—like segmentation models—are even further from being a product than LLMs.
If it made sense to get liability insurance for the open-source framework React, then it would make sense for AI. But it doesn’t at all! The only liability insurance I know of covers things that are high-level final products in the value chain, not low-level inputs like steel or plastic.
I think it’s pretty obvious that requiring steel companies to get insurance for the misuse of their steel is a bad idea, one that this post… just sidesteps?
Now we have the machinery to properly reply to that comment. In short: it’s a decent analogy (assuming there’s some lawsuit-able harm from fake driver’s licenses). The part I disagree with is the predicted result. What I actually think would happen is that Photoshop would be mildly more expensive, and would contain code which tries to recognize and stop things like editing a photo of a driver’s license. Or they’d just eat the cost without any guardrails at all, if users really hated the guardrails and were willing to pay enough extra to cover liability.
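To make that concrete, here is a minimal sketch of what such a guardrail could look like, assuming a hypothetical `sensitive_document_score` classifier (none of these names come from any real product; the stub stands in for a vision model):

```python
# A hypothetical sketch (not Adobe's actual approach) of a liability-driven
# guardrail inside an image editor: before applying an edit, score the image
# with a classifier trained to recognize sensitive documents (e.g. driver's
# licenses) and refuse on a confident match. The classifier is a stub so the
# example is self-contained.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Image:
    pixels: bytes  # stand-in for real raster data

Edit = Callable[[Image], Image]

REFUSAL_THRESHOLD = 0.9  # trades false positives against expected liability

def sensitive_document_score(image: Image) -> float:
    """Hypothetical DL classifier returning P(image depicts an ID document).
    A real editor would call a vision model here; this stub always says 0."""
    return 0.0

def guarded_apply_edit(image: Image, edit: Edit) -> Image:
    """Apply an edit only if the image doesn't look like an ID document."""
    if sensitive_document_score(image) >= REFUSAL_THRESHOLD:
        raise PermissionError("Editing apparent ID documents is disabled.")
    return edit(image)
```

The interesting knob is the refusal threshold: liability pricing would push the editor to tune it until expected liability falls below the cost of lost legitimate use.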
What’s weird about this post is that, until modern DL-based computer vision was invented, such a guardrail would have been an enormous pain to build; honestly, I think it may well have been impossible to implement effectively. Prior to DL it would be even more unlikely that you could, for instance, make it impossible to use Photoshop to make porn of someone without also disabling legitimate use. Yet the original post wants to sue ML companies on the basis of their technology being used for that. I dunno man.
I assume various businesses insure intermediate goods pretty often? Also, whenever two businesses sign a contract for a big order of e.g. steel, the lawyers spend a bunch of time hashing out who will be liable for a gazillion different kinds of problems, and often part of the answer will be “company X is generally responsible for the bulk of problems, in exchange for a price somewhat favoring them”.
Not sure why this seems so crazy to you; it seems fairly normal to me.
What’s weird about this post is that, until modern DL-based computer vision was invented, such a guardrail would have been an enormous pain to build; honestly, I think it may well have been impossible to implement effectively.
Yeah, so that’s a case where the company/consumers would presumably eat the cost, as long as the product is delivering value in excess of the harms.