Leopold’s thesis is that there is, or soon will be, a race between the US and China. Certainly there has been talk about this in Washington. The question then is, if the US government wants to keep the Chinese government from having access to AGI tech, what’s the hard step we can block them from reproducing? Cognitive scaffolding doesn’t seem like a good candidate: so far it’s been relatively cheap and easy (the AI startup I work for built pretty effective cognitive scaffolding with a few engineers working for a few months). Making the chips and building an O($100 billion) cluster of them to train the model is Leopold’s candidate for that, and current US GPU and chip-making technology export controls suggest the US government already agrees. If that’s the case, then an adversary stealing the weights to the AGI model we trained becomes a huge national security concern, so security precautions against that become vital, thus the government gets closely involved (perhaps starting by arranging that the ex-head of the NSA joins the board of one leading lab).
Leopold also thinks that the algorithmic efficiency advances, as divisors on the cost of the training cluster required, will become vital national security secrets too — thus the researchers whose heads they’re in will become part of this secretive Manhattan-like project.
To me, this all sounds rather plausible, and distinctly different from the situation for other industries like electricity or aerospace or the Internet that you cite, which all lack such a “hard part” choke-point. The analogy with nuclear weapons, where the hard/monitorable part is the isotope enrichment, seems pretty valid to me. And there are at least early signs that the US government is taking AI seriously, and looking at the situation through the lenses of some combination of AI risk and international competition.
So I see Leopold’s suggestion as coming in two parts: a) the military/intelligence services will get involved (which I see as inevitable, and for which I already see evidence), and b) the expense involved, the degree of secrecy required, and the degree of power that ASI represents mean their response will look more like nationalization followed by a Manhattan Project than a more typical military-industrial-complex arrangement. That I also view as fairly plausible, but not indisputable. However, Leopold’s essay has certainly moved the Overton Window on this.