If we made drug manufacturers liable for every health effect that people experience after taking their drugs, nobody would sell any drugs.
For vaccines, we have laws that explicitly remove liability from producers (in the US, the National Childhood Vaccine Injury Act of 1986), because we believe that dragging vaccine manufacturers before a jury of people without scientific training does not lead to good outcomes.
The Lyme disease vaccine LYMErix passed regulatory approval, but liability concerns took it off the market.
Making OpenAI liable for everything their model does would make them radically restrict usage of the model. It’s a strong move to slow down AI progress, perhaps even stronger than the effect IRBs have on slowing down science in general.
Right, and as Tyler Cowen pointed out in the article I linked to, we don’t hold the phone company liable if, e.g., criminals use the telephone to plan and execute a crime.
So even if (or when) liability is the solution, or part of it, it’s not simple or obvious how to apply it. It needs good, careful thinking in each case about where the liability should lie and under what circumstances. This is why we need legal experts thinking about these things.
I have the impression that your post asserts that the review-and-approval paradigm is in some way more problematic than other regulatory paradigms. It’s unclear to me why that would be true.
While it sounds absurd to talk about holding the phone company liable when criminals use the telephone, there are legal proposals to do exactly that for at least some crimes. In the EU, there’s a proposal to run machine learning on users’ devices to detect when a user engages in certain crimes and alert the authorities.
Brazil is currently debating legal liability for social media companies that publish “fake news.”