A simple solution is to just make doctors/hospitals liable for harm which occurs under their watch, period. Do not give them an out involving performative tests which don’t actually reduce harm, or the like. If doctors/hospitals are just generally liable for harm, then they’re incentivized to actually reduce it.
Can you explain more what you actually mean by this? Do you mean that if someone comes into the hospital and dies, the doctors are responsible, regardless of why they died? If instead you mean that we figure out whether the doctors are responsible for the patient's death, then we're back to asking whether they did everything to prevent it, and one of those things might be ordering lab tests to better figure out the diagnosis, which seems to land us back at the original problem, i.e. the status quo. I'm just not understanding what you mean.
If someone comes into the hospital and dies, the doctors are responsible, regardless of why they died. Same for injuries, sickness, etc. That would be the simplest and purest version, though it would probably be expensive.
One could maybe adjust in some ways, e.g. the doctors’ responsibility is lessened if the person had some very legible problem from the start (before they showed up to the doctor/hospital), or the doctor’s responsibility is always lessened by some baseline amount corresponding to the (age-adjusted) background rate of death/injury/sickness. But the key part is that a court typically does not ask whether the death/injury/sickness is the doctor’s fault. They just ask whether it occurred under the doctor/hospital’s watch at all.
Why would you operate a hospital at all under this legal system?
Because people would pay you to take them under your care.
Let’s say Jane gets in a serious car crash. Without immediate medical care, she will surely die of her acute injuries and blood loss. With the best available medical care, she has an 80% chance of living and recovering.
Per NHTSA, the statistical value of a human life is $12.5M. As such, by admitting Jane, the hospital faces $2.5M in expected liability (rough arithmetic sketched below).
Are you expecting that Jane will front the $2.5M? Or do you have some other mechanism in mind?
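For concreteness, here is the back-of-envelope arithmetic behind that $2.5M figure, assuming the naive rule where the hospital simply eats the full statistical value of any death that occurs under its watch:

```python
# Expected liability from admitting Jane under a naive
# "liable for any death under your watch" rule.
VSL = 12_500_000             # NHTSA value of a statistical life, USD
P_SURVIVE_BEST_CARE = 0.80   # Jane's odds with the best available care

expected_liability = (1 - P_SURVIVE_BEST_CARE) * VSL
print(f"${expected_liability:,.0f}")   # -> $2,500,000
```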
Theoretically the hospital should be liable in this situation for the marginal changes. So if they manage to save 81% of the car accident victims they face no liability, and if they only save 79% they face liability for the 1% marginal shortfall, times the per-death figure, across the patients they treat (rough sketch at the end of this comment).
So theoretically Jane's insurance company would have to pay the premium for Jane's care, plus an extra charge for the hospital's overhead? It might be 0.1%–1% of the per-death figure you mentioned?
Obviously there is too much variance to measure this. So instead what happens is that plaintiffs' attorneys scour the medical records of some of the patients who died, looking for gross negligence in writing, and of course this means the doctors charting everything have an incentive to always say they did everything by the book, regardless of what really happened. Basically, the liability system barely works.
But in this hypothetical the hospital should be liable on the margin.
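A rough sketch of that marginal rule, using the 80% benchmark and $12.5M figure from earlier in the thread; the exact formula is my guess at what's intended:

```python
# "Liable on the margin": the hospital only pays for deaths in excess of
# what the best available care would allow. The benchmark and VSL are the
# thread's figures; the formula itself is an assumption.
VSL = 12_500_000            # value of a statistical life, USD
BENCHMARK_SURVIVAL = 0.80   # survival rate with the best available care

def marginal_liability(actual_survival: float, n_patients: int) -> float:
    """Pay only for the survival shortfall relative to the benchmark."""
    shortfall = max(0.0, BENCHMARK_SURVIVAL - actual_survival)
    return shortfall * VSL * n_patients

print(marginal_liability(0.81, 1_000))   # beats the benchmark -> 0.0
print(marginal_liability(0.79, 1_000))   # 1% shortfall -> roughly $125M
```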
That does not sound like an improvement over the current US healthcare system in terms of aligning incentives.
“Better than the current US healthcare system in terms of aligning incentives” is not a high bar to clear. Failing to clear it sounds, to me, like a strong indication that the policy proposal needs work.
I’m not sure how, in this case, making AI companies liable for “deepfakes, hallucinations, fake reports” would equate to anything but an AI ban. Being able to fake an image or generate a plausible report is a core functionality of an LLM, in the same way that cutting wood and similar materials is a core functionality of a chainsaw. Hallucinations are also somewhat core; they are only preventable with secondary checks, which will leak some of the time.
You could not sell a chainsaw if the company were liable for any murders committed with it. The chainsaw engine has no way of knowing when it is being misused. Likewise, an AI model in a separate session doesn’t have the context to know when it’s being asked for a deepfake or a fake report. And having it refuse whenever the model suspects this is just irritating and reduces the value of the model.
So the real-life liability equivalent is that a chainsaw manufacturer only faces liability for things like a guard that breaks or the lack of a kill switch. And the actual chainsaw users are companies who buy insurance for chainsaw accidents when they send workers equipped with chainsaws to cut trees, etc. Liability law kind of works here: there is armor a user can wear that protects them, and the insurance company may demand that workers always wear their Kevlar armor when sawing.
Was this a reply to the correct comment?
(Asking because it’s a pretty coherent reply to my Deepfakes-R-Us scenario from yesterday’s thread)
It was but now I do want to reply to the main post again.
In that case, I agree with you that “make anyone who develops anything that the government considers to be AI liable for all harms related to that thing, including harms from malicious use of the product that were not intended and could not have reasonably been foreseen” is a de-facto ban on developing anything AI-like while having enough money to be worth suing.
I think the hope was that saying “this problem (difficulty of guaranteeing that misuse won’t happen) exists, you can’t have nice things (AI) until it’s solved” would push people to solve the problem. Attempts like this haven’t worked well in the past (see the Jones Act, which was “this problem (not enough American-built ships) exists, you can’t have nice things (water-based shipping between one US port and another) until it’s fixed”), and I don’t expect they’d work well here either.
So I’ve thought about it, and I think @johnswentworth is implicitly thinking about context-aware AI products. These would be machines that have an ongoing record of all interactions with a specific human user, like in this story. They may also have other sources of information; for example, the AI may demand that you enable your webcam.
So each request would be more like asking a human professional to do it, and that professional carries liability insurance. Strict liability doesn’t apply to humans either, though. Humans can say they were lied to or tricked and get let off the hook, but they are often found civilly or criminally liable if “they should have known”, or if they talk in writing in a way that indicates they know they are committing a crime. For example, the FTX group chat named “wire fraud”.
A human professional would know if they were being asked to make a deepfake, they would conscientiously try not to hallucinate and would check every fact in a report, and they would know if they were making a fake document. I think @johnswentworth is assuming that an AI system will be able to do this soon.
I think context-aware AI products, which ironically would be the only way to even try to comply with a law like this, are probably a very bad idea. They are extremely complex and nearly impossible to debug, because the output of the system depends on the context of the user’s records, a file that will be completely unique for every living human who has ever used an AI.
Tool AI that works on small, short-duration, myopic tasks—which means it will be easy to misuse, just like any tool today can be misused—is probably safer, and is definitely easier to debug, because the tool’s behavior is either correct or not correct within the narrow context the tool was given: “I instructed the AI to paint this car red using standard auto painting techniques; was it painted or not?” If a human was locked in the trunk and died as a consequence, that’s not the painting machine’s fault.
Just for more detail on myopia: “paint a car” is not a simple task, but you can subdivide it into many tiny subtasks, such as “plan where to apply masking” and “apply the masking”, or “plan spray strokes” and “execute spray stroke n”. You can drill down into an isolated subtask, where “the world” is just what the sensors can perceive inside the car-painting bay, and check for task correctness, prediction error, and other quantifiable metrics.
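To make “check for task correctness” concrete, here is a toy example of the kind of quantifiable check an isolated subtask could get; the sensor grid, coverage metric, and 95% threshold are all made up for illustration:

```python
# Toy pass/fail check for one myopic subtask ("execute spray stroke n").
# Everything the checker sees is confined to the paint bay's sensors, and
# the criterion is a simple quantifiable metric. All specifics are hypothetical.

def stroke_coverage(target_cells: set, painted_cells: set) -> float:
    """Fraction of the target region that the stroke actually covered."""
    if not target_cells:
        return 1.0
    return len(target_cells & painted_cells) / len(target_cells)

def subtask_passed(target_cells: set, painted_cells: set, threshold: float = 0.95) -> bool:
    return stroke_coverage(target_cells, painted_cells) >= threshold

# Example: the stroke was supposed to cover a 10x10 patch and missed 3 cells.
target = {(x, y) for x in range(10) for y in range(10)}
painted = target - {(0, 0), (0, 1), (9, 9)}
print(subtask_passed(target, painted))   # 97/100 = 0.97 -> True
```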
Maybe if the $12.5M is paid to Jane when she dies, she could e.g. sign a contract saying that she waives her right to any such payments the hospital becomes liable for.
If you allow the provider of a product or service to contract away their liability, I predict in most cases they will create a standard form contract that they require all customers to sign that transfers 100% of the liability to the customer in ~all circumstances, which presumably defeats the purpose of assigning it to the provider in the first place.
Yes, customers could refuse to sign the contract. But if they were prepared to do that, why haven’t they already demanded a contract in which the provider accepts liability (or provides insurance), and refused to do business without one? Based on my observations, in most cases, ~all customers sign the EULA, and the company won’t even negotiate with anyone who objects because it’s not worth the transaction costs.
Now, even if you allow negotiating liability away, it would still be meaningful to assign the provider liability for harm to third parties, since the provider can’t force third parties to sign a form contract (they will still transfer that liability to the customer, but this leaves the provider as second-in-line to pay, if the customer isn’t caught or can’t pay). So this would matter if you’re selling the train that the customer is going to drive past a flammable field like in the OP’s example. But if you’re going to allow this in the hospital example, I think the hospital doesn’t end up keeping any of the liability John was trying to assign them, and maybe even gets rid of all of their current malpractice liability too.
That’s a bad criterion to use.
See Robin Hanson’s Buy Health proposal for a better option.