OK, some clarifying discussion.
You mention in the replies that you want strict liability, or de facto strict liability, where if harm occurs the AI company is liable for the full damages, even if it took countermeasures.
As I understand it, current liability law does not function this way. You gave the example where the “problem was fixed by putting a little rubber wheel on the landing gear lever,” in a cockpit where the “flaps and landing gear were identical and right next to each other.”
All real solutions to engineering problems are probabilistic. There is no such thing as an absolute fix. The little rubber wheel is not enough; under some circumstances a pilot will still grab the wrong lever and deploy it. The mitigation just reduces the chances of that happening.
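To make the probabilistic point concrete, here is a minimal sketch with purely illustrative numbers I am assuming (the per-flight error rates and fleet size are not from any real data): a mitigation like the rubber wheel cuts the expected number of wrong-lever incidents across a fleet, but does not drive it to zero.

```python
# Minimal sketch with assumed, purely illustrative numbers:
# a mitigation lowers the per-flight probability of grabbing the
# wrong lever, but over many flights the expected incident count
# is still nonzero.

p_error_without_wheel = 1e-4   # assumed per-flight wrong-lever probability, no mitigation
p_error_with_wheel = 1e-6      # assumed per-flight probability after adding the rubber wheel
flights_per_year = 10_000_000  # assumed fleet-wide flights per year

expected_without = p_error_without_wheel * flights_per_year
expected_with = p_error_with_wheel * flights_per_year

print(f"Expected incidents/year without mitigation: {expected_without:.0f}")
print(f"Expected incidents/year with mitigation:    {expected_with:.0f}")
# Prints 1000 vs 10: greatly reduced, but not an "absolute fix".
```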
What would happen in court is that the company may well win the liability case when a crash happens, because the pilot evidently didn’t feel for the wheel before pulling the lever. The rubber wheel counts as due diligence, so the blame shifts to the pilot.
You gave some examples of harms you want AI not to commit: deepfakes, hallucinations, fake reports.
In each case there are mitigations: we can add additional filters that reduce how often these happen. But it is not possible to prevent them, especially because users are free to open a fresh context window and try as many times as they want. Eventually they will find a way past the filters.
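As a rough illustration of why unlimited retries defeat any probabilistic filter, here is a sketch assuming (purely for illustration) that each fresh attempt independently slips past the filter with some small probability; the chance of at least one success climbs toward 1 as attempts accumulate.

```python
# Sketch of the retry argument, assuming each fresh attempt independently
# bypasses the filter with a small probability p. The chance of at least
# one bypass after n attempts is 1 - (1 - p)**n, which approaches 1.

p_bypass_per_attempt = 0.01  # assumed, illustrative per-attempt bypass probability

for n in (1, 10, 100, 500, 1000):
    p_at_least_once = 1 - (1 - p_bypass_per_attempt) ** n
    print(f"{n:5d} attempts -> P(at least one bypass) = {p_at_least_once:.3f}")

# With p = 1%, roughly 100 attempts already gives about a 63% chance of
# at least one bypass, and roughly 500 attempts gives over 99%.
```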
In a court case, the logs would show those repeated attempts, and that would be used to shift the blame to the user.
A strict liability scheme needs the manufacturer to be able to control the users. In such a world, all pilots would have to be approved by an aircraft manufacturer in order to fly. All drivers would need automaker approval, and the car would automatically detect poor driving and ban the driver from ever using the company’s products again. And AI companies probably couldn’t have public users at all; it would have to be all company employees.
It’s not a world we are familiar with. Do you agree with this user problem, @johnswentworth? I think you may have neglected two crucial factors in your analysis:
1. Context. No current machine, including AI, can reliably determine when it is being misused. For example, a terrain avoidance warning system in an aircraft cannot know when it is faulty, so the pilot is able to override it.
2. Pathological users. In a strict liability world, pilots who deliberately crash their planes and reckless human drivers both create liability for the manufacturer. With current technology, manufacturers cannot fix this, because of (1).