Similarly, he claims that the bill does not acknowledge trade-offs, but the reasonable care standard is centered precisely on weighing costs against benefits.
Could somebody elaborate on this?
My understanding is that if a company releases an AI model knowing it can be easily exploited (‘jailbroken’), they could be held legally responsible—even if the model’s potential economic benefits far outweigh its risks.
For example, if a model could generate trillions in economic value but also enable billions in damages through cyberattacks, would releasing it be illegal despite the net positive impact?
Furthermore, while the concept of ‘reasonable care’ allows for some risk, doesn’t it prohibit companies from making decisions based solely on overall societal cost-benefit analysis? In other words, can a company justify releasing a vulnerable AI model just because its benefits outweigh its risks on a societal level?
It seems to me that this would be prohibited under the bill in question, and that strikes me as a bad outcome: destroying a great deal of potential economic value while having a negligible effect on x-risk. Why not drop everything that isn’t related to x-risk and instead increase the demands on reporting, openness, sharing risk assessments, etc.? That seems far more valuable and far easier to comply with.
Yes, we will live in a world where everything is under (some level of) cyberattack 24/7, every identity has to be questioned, and every picture and video has to somehow be proven real, and the most this bill can do is buy us a little more time before that starts happening. Why not get used to it now, and try also to maximize the advantages of having access to competent AI models (as long as they aren’t capable of causing x-risks)?