It was, but now I do want to reply to the main post again.
In that case, I agree with you that “make anyone who develops anything that the government considers to be AI liable for all harms related to that thing, including harms from malicious use of the product that were not intended and could not have reasonably been foreseen” is a de facto ban on developing anything AI-like by anyone with enough money to be worth suing.
I think the hope was that saying “this problem (difficulty of guaranteeing that misuse won’t happen) exists, you can’t have nice things (AI) until it’s solved” would push people to solve the problem. Attempts like this haven’t worked well in the past (see the Jones Act, which was “this problem (not enough American-built ships) exists, you can’t have nice things (water-based shipping between one US port and another) until it’s fixed”), and I don’t expect they’d work well here either.
So I’ve thought about it, and I think @johnswentworth is implicitly thinking about context-aware AI products. These would be machines that keep an ongoing record of all interactions with a specific human user, like in this story. They may also have other sources of information; for example, the AI may demand that you enable your webcam.
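To make “context-aware AI product” concrete, here is a rough Python sketch of the shape I have in mind; every class and field name below is my own invention, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """Everything the product has ever observed about one user."""
    user_id: str
    interactions: list[str] = field(default_factory=list)   # full chat history
    other_signals: list[str] = field(default_factory=list)  # e.g. webcam frames, if demanded

class ContextAwareAssistant:
    """Answers every request against the user's entire accumulated record."""

    def __init__(self) -> None:
        self._records: dict[str, UserRecord] = {}

    def handle(self, user_id: str, request: str) -> str:
        record = self._records.setdefault(user_id, UserRecord(user_id))
        record.interactions.append(request)
        # A real product would hand `record` to a model here; this stub only
        # shows that the effective input grows with the user's whole history.
        context_size = len(record.interactions) + len(record.other_signals)
        return f"(answer conditioned on {context_size} prior observations)"
```

The point of the stub is just that every answer is conditioned on the whole accumulated record, so the same request from two different users is never really the same input.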
So each request would be more like asking a human professional to handle it, and that professional carries liability insurance. Strict liability doesn’t apply to humans either, though. Humans can say they were lied to or tricked and get let off the hook, but they are often found civilly or criminally liable if “they should have known,” or if they put things in writing that indicate they know they are committing a crime. For example, the FTX group chat named “wire fraud”.
A human professional would know if they were being asked to make a deepfake, they would conscientiously check every fact in a report rather than hallucinate, and they would know if they were producing a fake document. I think @johnswentworth is assuming that an AI system will be able to do this soon.
I think context-aware AI products, which ironically would be the only way to even try to comply with a law like this, are probably a very bad idea. They are extremely complex and nearly impossible to debug, because the system’s output depends on the user’s record: a context file that will be completely unique for every human who has ever used the AI.
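A toy illustration of why that per-user context makes debugging so hard; the hash is only a stand-in for whatever the model would actually do with the record:

```python
import hashlib

def answer(request: str, history: list[str]) -> str:
    """Stand-in for a context-aware model: the output is a function of the
    request plus the user's entire record, not the request alone."""
    digest = hashlib.sha256("\n".join(history + [request]).encode()).hexdigest()[:8]
    return f"answer-{digest}"

alice = ["asked about tax law", "uploaded 300 vacation photos", "enabled webcam"]
bob = ["asked about tax law"]

# Identical request, different behavior, because the effective input differs:
print(answer("summarize my situation", alice))
print(answer("summarize my situation", bob))
```

Reproducing a reported bug means capturing that one user’s exact history at that moment, a fixture no other user shares and no test suite is likely to contain.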
Tool AI that works on small, short-duration, myopic tasks (which means it will be easy to misuse, just like any tool today can be misused) is probably safer and is definitely easier to debug, because the tool’s behavior is either correct or incorrect within the narrow context it was given: “I instructed the AI to paint this car red using standard auto painting techniques; was it painted or not?” If a human was locked in the trunk and died as a consequence, that’s not the painting machine’s fault.
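Here is the kind of narrow check I mean, as a toy sketch; the field names and the coverage threshold are made up:

```python
from dataclasses import dataclass

@dataclass
class PaintTask:
    """The narrow, myopic spec the tool is accountable for."""
    target: str   # e.g. "car_042"
    color: str    # e.g. "red"

@dataclass
class BayObservation:
    """Only what the paint-bay sensors can see after the job."""
    target: str
    measured_color: str
    coverage_fraction: float  # 0.0 .. 1.0, from an inspection camera

def task_correct(task: PaintTask, obs: BayObservation) -> bool:
    # Pass/fail is decidable entirely inside the tool's narrow context.
    # Anything the bay sensors cannot see (e.g. the contents of the trunk)
    # is, by construction, outside this check.
    return (obs.target == task.target
            and obs.measured_color == task.color
            and obs.coverage_fraction > 0.98)
```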
Just for more detail on myopia: “paint a car” is not a simple task, but you can subdivide it into many tiny subtasks, such as “plan where to apply masking” and “apply the masking,” or “plan spray strokes” and “execute spray stroke n.” You can drill down into an isolated subtask, where “the world” is just what the sensors can perceive inside the car-painting bay, and check for task correctness, prediction error, and other quantifiable metrics.
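And a toy version of the per-subtask audit; the sensor names and thresholds are invented for illustration:

```python
# Decompose "paint a car" into myopic subtasks, each with its own
# quantifiable check against what the bay sensors report. "The world"
# for every check is just this sensor dictionary.

def masking_planned(s: dict) -> bool:
    return s["planned_mask_regions"] > 0

def masking_applied(s: dict) -> bool:
    return s["unmasked_glass_area_cm2"] < 1.0

def strokes_planned(s: dict) -> bool:
    return s["planned_strokes"] > 0

def stroke_executed(s: dict) -> bool:
    return s["stroke_path_error_mm"] < 2.0  # deviation from the planned path

SUBTASKS = [
    ("plan where to apply masking", masking_planned),
    ("apply the masking", masking_applied),
    ("plan spray strokes", strokes_planned),
    ("execute spray stroke n", stroke_executed),
]

def audit(sensors: dict) -> dict:
    return {name: check(sensors) for name, check in SUBTASKS}

print(audit({"planned_mask_regions": 12, "unmasked_glass_area_cm2": 0.3,
             "planned_strokes": 40, "stroke_path_error_mm": 1.1}))
```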