It’s probably not possible to prevent nation-state attacks without nation-state-level assistance on your side. Detecting and preventing moles is something that even the NSA/CIA haven’t been able to fully accomplish.
Truly secure infrastructure would be hardware designed, manufactured, configured, and operated in-house, running formally verified software also designed in-house, where no individual has root on any of the infrastructure; instead, software automation manages all operations and requires M out of N people to agree on any change, where M is greater than the worst-case expected number of moles.
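The M-of-N approval rule can be sketched in a few lines. This is my own illustration (the reviewer names and threshold are invented, not any real company's system): a change goes through only when at least M distinct members of the trusted set sign off, so fewer than M colluding moles cannot push a change on their own.

```python
# Sketch of an M-of-N change-approval gate (illustrative names/values).

TRUSTED_REVIEWERS = {"alice", "bob", "carol", "dave", "erin"}  # N = 5
M = 3  # must exceed the worst-case expected number of moles

def change_approved(approvals: set[str]) -> bool:
    """Return True iff at least M distinct trusted reviewers approved."""
    valid = approvals & TRUSTED_REVIEWERS  # ignore unknown identities
    return len(valid) >= M

print(change_approved({"alice", "bob"}))           # 2 of 5 -> False
print(change_approved({"alice", "bob", "carol"}))  # 3 of 5 -> True
```

Real systems enforce this in the deployment pipeline itself (signed approvals checked by automation before a change can ship), not in application code, but the quorum logic is the same.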
The one sure thing about the above model is that it’s very costly to achieve (in bureaucracy, time, expertise, and money). But every exception to it (remote manufacture, colocated data centers, ad-hoc software development, etc.) introduces a significant point of compromise from which an attacker can spread across the entire organization.
The two FAANGs I’ve been at take the approach of trusting remotely manufactured hardware on two counts: explicitly trusting AMD and Intel not to be compromised, and establishing tight enough manufacturing relationships with suppliers (plus doing their own evaluations of finished hardware) to have greater confidence that backdoors won’t be inserted. Both of them ran custom firmware on most hardware (chipsets, network cards, hard disks, etc.) to minimize that route of compromise. They also, for the most part, maintain their own sets of patches for the open-source and free software they run, and have large security teams devoted to finding vulnerabilities and otherwise improving their internal codebases. Patches do get pushed upstream, but they insert themselves very early in responsible disclosures so they can patch their own systems before public patches are available. Formal software verification is still in its infancy, so extensive unit and integration tests plus red-team penetration testing make up for that a bit.
The AGI infrastructure security problem is therefore pretty sketchy for all but the largest security-focused companies or governments. There are best practices small companies can follow for infrastructure (what I tentatively recommend is “use G Suite and IAM for security policy, turn on advanced account protection, use Chromebooks, and use GCP for compute,” all of which gets 80–90% of the practical protections Googlers have internally), but rolling their own piecemeal is fraught with risk and also costly. There simply are no public solutions as comprehensive or as well-maintained as what some of the FAANGs have achieved.
On top of infrastructure sits the common jumble of machine-learning software pulled together from minimally policed public repositories into a complex assembly of tools for training and validating models and running experiments. No one seems to have a cohesive story for ML operations, and there’s heavy reliance on big, complex packages from many vendors (drivers + CUDA + libraries + model frameworks, etc.) that are usually the opposite of security-focused. It doesn’t matter how solid the infrastructure is when, for example, a Python notebook server listens for commands on the public Internet in its default configuration. Writing good ML tooling is also very costly, especially if it has to keep up with the state of the art.
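For the notebook example, the fix is at least cheap: pin the server to the loopback interface so it never listens on a public address. A sketch of a config file, not an exhaustive hardening guide (option names are from the jupyter-server package; older classic-notebook installs use `c.NotebookApp.*` instead of `c.ServerApp.*`):

```python
# jupyter_server_config.py — sketch: restrict a Jupyter server to
# localhost-only access (jupyter-server option names; classic notebook
# uses c.NotebookApp.* equivalents).

c.ServerApp.ip = "127.0.0.1"             # listen only on loopback, never 0.0.0.0
c.ServerApp.open_browser = False         # headless servers shouldn't launch a browser
c.ServerApp.allow_remote_access = False  # reject requests with non-local Host headers
```

Remote users then reach the notebook through an authenticated tunnel (SSH port forwarding or an identity-aware proxy) rather than a directly exposed port.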
AI Alignment is a hard problem, and information security is similarly hard because it attempts to enforce a subset of human values about data and resources in a machine-readable and machine-enforceable way. I agree with the authors that security is vitally important for AGI research, but I don’t have a lot of hope that it’s achievable where it matters (against hostile nation-states). Security means added cost, which usually means slower progress, which means unaligned AGI efforts move faster.
One thing I wonder: does this reduce the effective spending a company would be willing to make on a large model, given the likelihood of any competitive advantage it’d lend being eroded via cybertheft?
Could this go some of the way to explaining why we don’t get billion-dollar model runs, as opposed to engineering-heavy research which is naturally more distributed and harder to steal?
I’d expect companies to mitigate the risk of model theft with fairly affordable insurance. Movie studios and software companies invest hundreds of millions of dollars into individual, easily copyable MPEGs and executable files. Billion-dollar models probably don’t meet the risk/reward criteria yet. When a $100M model is human-level AGI, it will almost certainly be worth the risk of training a $1B model.
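The risk/reward intuition can be made concrete with a toy expected-value calculation (all numbers invented for illustration): training is worth it when the value the company expects to retain after possible theft still exceeds the training cost.

```python
# Toy expected-value sketch of the model-theft tradeoff (invented numbers).

def expected_value(gross_value, p_theft, value_if_stolen, training_cost):
    """Expected net value of a training run, given a theft probability and
    the reduced value the model retains once a rival has a stolen copy."""
    retained = (1 - p_theft) * gross_value + p_theft * value_if_stolen
    return retained - training_cost

# A $1B run with a $5B payoff that drops to $1.5B if stolen, at 50% theft odds:
print(expected_value(5e9, 0.5, 1.5e9, 1e9))  # 2.25e9 -> still worth training
```

On these made-up numbers the run stays positive even with even odds of theft, which is the sense in which a sufficiently valuable model is "worth the risk" regardless.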
This could mitigate financial risk to the company, but I don’t think anyone will sell existential-risk insurance, or that it would be effective if they did.