A hardware protection mechanism that needs to confirm permission to run by periodically dialing home would, even if restricted to large GPU installations, brick any large scientific computing system or NN deployment that needs to be air-gapped (e.g. because it deals with sensitive personal data, particularly sensitive commercial secrets, or classified data). Such regulation also hands whoever controls the green light a kill switch against any large GPU application that runs critical infrastructure. Both points would severely damage national security interests.
On the other hand, at least as of this writing, most cybersecurity professionals would probably view the doom scenarios this is supposed to protect against as an example of poor threat modelling (in this case, assuming the adversary is essentially almighty and that everything they do will succeed on their first try, whereas anything we try will fail because it is our first try).
In summary, I don’t think this would (or should) fly, but obviously I might be wrong. For a point of reference, techniques similar in spirit have been seriously proposed to regulate the use of cryptography (for instance, via adoption of the Clipper chip), but I think it’s fair to say they have not been very successful.
> A hardware protection mechanism that needs to confirm permission to run by periodically dialing home would, even if restricted to large GPU installations, brick any large scientific computing system or NN deployment that needs to be air-gapped (e.g. because it deals with sensitive personal data, particularly sensitive commercial secrets, or classified data). Such regulation also hands whoever controls the green light a kill switch against any large GPU application that runs critical infrastructure. Both points would severely damage national security interests.
Yup! Probably don’t rely on a completely automated system that only works over the internet for those use cases. There are fairly simple (for bureaucratic definitions of simple) workarounds. The driver doesn’t actually need to send a message anywhere; it just needs a token. Air-gapped systems can still be given those small cryptographic tokens in a reasonably secure way (if it is possible to use the system in a secure way at all), and for systems where this kind of feature is simply not an option, it’s probably worth having a separate regulatory path. I bet NVIDIA would be happy to set up some additional market segmentation at the right price.
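As a rough illustration of the "token, not a phone call" idea, here is a minimal sketch of what the driver-side check might look like. The token layout (hardware ID and expiry followed by an Ed25519 signature) and the function names are entirely hypothetical; nothing here is an actual driver interface.

```python
# Hypothetical sketch: gating compute on a locally stored token instead of
# a network round-trip. Uses Ed25519 from the `cryptography` library.
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

SIG_LEN = 64  # Ed25519 signatures are a fixed 64 bytes

def token_is_valid(token: bytes, hardware_id: str,
                   regulator_key: Ed25519PublicKey) -> bool:
    """Fully offline check: valid signature, matching hardware, unexpired."""
    payload, signature = token[:-SIG_LEN], token[-SIG_LEN:]
    try:
        regulator_key.verify(signature, payload)  # raises if forged
    except InvalidSignature:
        return False
    token_id, _, expiry = payload.decode().partition("|")
    return token_id == hardware_id and time.time() < int(expiry)
```

The token itself can travel over whatever channel the site already trusts for software updates; the check never touches the network.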
The unstated assumption was that the green light would be controlled by US regulatory entities for hardware sold to US entities. Other countries could have their own agencies, and there would need to be international agreements to stop “jailbroken” hardware from being the default, but I’m primarily concerned about companies under the influence of the US government and its allies anyway (for now, at least).
> techniques similar in spirit have been seriously proposed to regulate the use of cryptography (for instance, via adoption of the Clipper chip), but I think it’s fair to say they have not been very successful.
I think there’s a meaningful difference between attempts to regulate cryptography and attempts to regulate large machine learning deployments: consumers will never interact with the regulatory infrastructure, and the negative externalities are extremely small compared to those of compromised or banned cryptography.
The regulation is intended to encourage a stable equilibrium among labs that may willingly follow it for profit-motivated reasons.
Extreme threat modeling doesn’t suggest ruling out plans that fail against almighty adversaries, it suggests using security mindset: reduce unnecessary load-bearing assumptions in the story you tell about why your system is secure. The proposal is mostly relying on standard cryptographic assumptions, and doesn’t seem likely to do worse in expectation than no regulation.
There is no problem with air gaps. Public key cryptography is a wonderful thing. Let there be a license file, which is a signed statement of a hardware ID and the duration for which the license is valid. You need the private key to produce a license file, but the public key suffices to verify it. Publish a license server which can verify license files and can be run inside air-gapped networks. Done.
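For concreteness, here is a minimal sketch of that scheme using Ed25519 signatures from Python’s `cryptography` library. The on-disk format (hardware ID and expiry joined by `|`, followed by the raw signature) is my own illustration; the comment doesn’t specify one.

```python
# Sketch of the license-file scheme: the issuer signs (hardware ID, expiry)
# with the private key; an air-gapped verifier needs only the public key.
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIG_LEN = 64  # Ed25519 signatures are a fixed 64 bytes

def issue_license(private_key: Ed25519PrivateKey,
                  hardware_id: str, valid_until: int) -> bytes:
    """Issuer side, run offline: only the regulator holds the private key."""
    payload = f"{hardware_id}|{valid_until}".encode()
    return payload + private_key.sign(payload)

def verify_license(public_key, license_file: bytes, hardware_id: str) -> bool:
    """License-server side: runs entirely inside an air-gapped network."""
    payload, signature = license_file[:-SIG_LEN], license_file[-SIG_LEN:]
    try:
        public_key.verify(signature, payload)
    except InvalidSignature:
        return False
    lic_id, _, expiry = payload.decode().partition("|")
    return lic_id == hardware_id and time.time() < int(expiry)

# Demo: issue a 90-day license and verify it with the public key alone.
issuer = Ed25519PrivateKey.generate()
lic = issue_license(issuer, "GPU-0042", int(time.time()) + 90 * 24 * 3600)
assert verify_license(issuer.public_key(), lic, "GPU-0042")
```

Note that verification fails closed: a forged signature, a mismatched hardware ID, or an expired license all return False, and the private key never needs to exist anywhere near the air-gapped network.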