Theoretically, the hospital should be liable in this situation for the marginal changes. So if they manage to save 81% of the car-accident victims they face no liability, and if they only save 80% they face liability for that marginal 1%, spread across the total number of deaths.
So theoretically Jane’s insurance company would have to pay the premium for Jane’s care, plus an extra charge for the hospital’s liability overhead? Maybe 0.1% to 1% of the total tab per death you mentioned?
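A minimal sketch of that marginal-liability arithmetic, with every number (benchmark survival rate, victim count, damages) made up purely for illustration:

```python
# Toy sketch of the marginal-liability arithmetic described above.
# Every number here (survival rates, victim count, damages) is an
# assumption made up for illustration, not a figure from the discussion.

benchmark_survival = 0.81      # survival rate at which the hospital owes nothing
actual_survival = 0.80         # the hospital's actual survival rate
victims = 1_000                # car-accident victims treated
damages_per_death = 1_000_000  # assumed wrongful-death damages

total_deaths = victims * (1 - actual_survival)                      # 200
marginal_deaths = victims * (benchmark_survival - actual_survival)  # 10

# Nobody can say which particular deaths were the marginal ones, so the
# liability gets spread as a small fractional share over every death.
share_per_death = marginal_deaths / total_deaths       # ~5% with these toy numbers;
                                                       # a smaller shortfall gives the
                                                       # 0.1%-1% range mentioned above
total_liability = marginal_deaths * damages_per_death

print(f"total deaths: {total_deaths:.0f}, marginal deaths: {marginal_deaths:.0f}")
print(f"liability share per death: {share_per_death:.1%}")
print(f"total hospital liability: ${total_liability:,.0f}")
```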
Obviously there is too much variance to measure this. So instead what happens is that plaintiffs’ attorneys scour the medical records of some of the patients who died, looking for gross negligence in writing, and of course this means the doctors doing the charting have an incentive to always say they did everything by the book regardless of what really happened. Basically, the liability system barely works.
But in this hypothetical the hospital should be liable on the margin.
That does not sound like an improvement over the current US healthcare system in terms of aligning incentives.
“Better than the current US healthcare system in terms of aligning incentives” is not a high bar to clear. Failing to clear it sounds, to me, like a strong indication that the policy proposal needs work.
I’m not sure how, in this case, making “AI companies liable for deepfakes, hallucinations, fake reports” would equate to anything but an AI ban. Being able to fake an image or generate a plausible report is core functionality of an LLM, in the same way that cutting wood and similar materials is core functionality of a chainsaw. Hallucinations are also somewhat core; they are only preventable with secondary checks, which will leak some of the time.
You could not sell a chainsaw if the company were liable for any murders committed with it. The chainsaw engine has no way of knowing when it is being misused. Likewise, an AI model in a separate session doesn’t have the context to know when it’s being asked for a deepfake or a fake report. And having the model refuse whenever it suspects this is just irritating and reduces its value.
So the real-life liability equivalent is that a chainsaw manufacturer only faces liability for things like a guard that breaks or a missing kill switch. The actual chainsaw users are companies that buy insurance for chainsaw accidents when they send workers equipped with chainsaws to cut trees. Liability law kind of works here: there is armor a user can wear that protects them, and the insurance company may demand that workers always wear their Kevlar armor when sawing.
Was this a reply to the correct comment?
(Asking because it’s a pretty coherent reply to my Deepfakes-R-Us scenario from yesterday’s thread)
It was, but now I do want to reply to the main post again.
In that case, I agree with you that “make anyone who develops anything that the government considers to be AI liable for all harms related to that thing, including harms from malicious use of the product that were not intended and could not have reasonably been foreseen” is a de facto ban on developing anything AI-like while having enough money to be worth suing.
I think the hope was that, by saying “this problem (difficulty of guaranteeing that misuse won’t happen) exists, you can’t have nice things (AI) until it’s solved”, people would be pushed to solve the problem. Attempts like this haven’t worked well in the past (see the Jones Act, which was “this problem (not enough American-built ships) exists, you can’t have nice things (water-based shipping between one US port and another) until it’s fixed”), and I don’t expect they’d work well here either.
So I’ve thought about it, and I think @johnswentworth is implicitly thinking about context-aware AI products. These would be machines that keep an ongoing record of all interactions with a specific human user, like in this story. They might also have other sources of information; for example, the AI may demand you enable the webcam.
So each request would be more like asking a human professional to do it, and that professional carries liability insurance. Strict liability doesn’t apply to humans either, though. Humans can say they were lied to or tricked and get let off the hook, but they are often found civilly or criminally liable if “they should have known”, or if they talk in writing in a way that indicates they know they are committing a crime. For example, the FTX group chat named “wire fraud”.
A human professional would know if they were being asked to make a deepfake, they would conscientiously try not to hallucinate and would check every fact in a report, and they would know if they were producing a fake document. I think @johnswentworth is assuming that an AI system will be able to do this soon.
I think context-aware AI products, which ironically would be the only way to even try to comply with a law like this, are probably a very bad idea. They are extremely complex and nearly impossible to debug, because the output of the system depends on the user’s context record, a file that will be completely unique for every living human who has ever used an AI.
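For concreteness, here is a rough structural sketch of what such a context-aware product might look like; every class, function, and the misuse heuristic below are invented for illustration, not anything from the post:

```python
from dataclasses import dataclass, field

# Rough structural sketch of a "context-aware" AI product as described
# above: every request is appended to a per-user record, and that whole
# record (not just the current prompt) informs whether to comply.
# All class/function names and the misuse heuristic are invented here.

@dataclass
class UserRecord:
    user_id: str
    interactions: list[str] = field(default_factory=list)  # full interaction history

    def add(self, request: str) -> None:
        self.interactions.append(request)

def looks_like_misuse(record: UserRecord) -> bool:
    # Placeholder heuristic: a real product would presumably run a model
    # over the whole history (plus webcam feed, etc.), not keyword-match.
    history = " ".join(record.interactions).lower()
    return "deepfake" in history or "fake report" in history

def handle_request(record: UserRecord, request: str, model) -> str:
    record.add(request)
    if looks_like_misuse(record):
        return "Refused: this request looks like misuse given your history."
    # The model sees the entire per-user record, which is why debugging is
    # hard: the output depends on a context file unique to each user.
    return model.generate(context=record.interactions, prompt=request)
```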
Tool AI that works on small, short-duration, myopic tasks (which means it will be easy to misuse, just like any tool today can be misused) is probably safer and is definitely easier to debug, because the tool’s behavior is either correct or not correct within the narrow context the tool was given. “I instructed the AI to paint this car red using standard auto-painting techniques; was it painted or not?” If a human was locked in the trunk and died as a consequence, that’s not the painting machine’s fault.
Just for more detail on myopia: “paint a car” is not a simple task, but you can subdivide it into many tiny subtasks, such as “plan where to apply masking” and “apply the masking”, or “plan spray strokes” and “execute spray stroke n”. You can drill down into an isolated subtask, where “the world” is just what the sensors can perceive inside the car-painting bay, and check for task correctness, prediction error, and other quantifiable metrics.
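A minimal sketch of that kind of per-subtask check, with made-up sensor metrics and tolerances (none of these names or numbers come from the thread):

```python
# Sketch of the myopic-subtask idea above: each subtask is judged in
# isolation, against only what the sensors in the paint bay can perceive.
# The subtask names, metrics, and tolerance are all invented for illustration.

SUBTASKS: list[str] = [
    "plan where to apply masking",
    "apply the masking",
    "plan spray strokes",
    "execute spray stroke 1",
    # ...one entry per stroke
]

def evaluate_subtask(
    predicted_state: dict[str, float],
    observed_state: dict[str, float],
    rel_tolerance: float = 0.05,
) -> bool:
    """A subtask counts as correct iff its predictions about the narrow
    world (the paint-bay sensors) match observation within tolerance."""
    rel_errors = [
        abs(predicted_state[k] - observed_state[k]) / max(abs(predicted_state[k]), 1e-9)
        for k in predicted_state
    ]
    return max(rel_errors) <= rel_tolerance

# Example: check one spray stroke against (made-up) sensor readings.
ok = evaluate_subtask(
    predicted_state={"coverage_fraction": 0.25, "film_thickness_um": 60.0},
    observed_state={"coverage_fraction": 0.24, "film_thickness_um": 60.5},
)
print("subtask correct:", ok)  # anything outside the paint bay is out of scope
```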