Summary: It looks like you are trying to extend liability law to cover situations it doesn't currently reach. Today, a foundation model company can develop an extremely capable model without directly offering it as an end product in a liability-generating situation. Other companies license the base model and deploy it as a psychologist or radiologist assistant, or to control robots, etc. Those deploying companies are the ones responsible for testing, licensing, and liability insurance, and if they generate more liability than their insurance can handle, they fail, blowing like a fuse. This structure shields the foundation model company. I believe you wish to extend liability law to reach the foundation model itself.
The argument has been made that OSHA has rendered many factories in the USA and Europe uneconomical, so that countries with weaker worker protections and cheaper labor (China, Mexico, the Philippines, etc.) have a competitive advantage (the labor being cheaper regardless of the protections).
YouTube videos of actual factories in China provide direct evidence that China's equivalent of OSHA is clearly laxer. I can link some if this point is disputed.
So this seems to be a subset of the general argument against any AI restriction that would net decelerate its development and adoption. Which slams into the problem that these proposed laws and restrictions are well meaning... but what exactly happens if foreign companies, exempt from liability and receiving direct government support, start offering the best and clearly most capable models?
What do Western companies do then?
1. License the foreign model? How? Who pays for the liability insurance? It's hosted in foreign data centers. This doesn't seem likely to happen.
2. Whatever amazing new tech foreign companies develop with their capable AIs, Westerners will be forced to simply pay for it, the same way foreign buyers today pay Western companies, who hold the IP for some of the most valuable products.
3. Militarily, this is not a good situation to be in.
So it seems to be the same coordination problem that applies to any other AI restriction: all major powers need to go along with it, or it's actually a bad idea to restrict anything.