I’m a big fan of work that imagines plausible future scenarios in which governments become more concerned about AI risks. Thank you for this!
I find myself least convinced by the sections about why “hard nationalization” isn’t going to happen. I don’t even necessarily disagree with the conclusion, but I think these sections treat AI like a “normal technology”, and many of the reasons listed don’t seem like they would be particularly important if the US were super concerned about AI.
IMO, the main factor determining whether or not hard nationalization is a feasible option is the extent to which the US government is concerned about superintelligence risks. I think the main cruxes will be things like “how concerned is the US about AI risks”, “how concerned is the US about AI misalignment risks”, and “to what extent does the US believe that superintelligence is truly a civilization-defining technology”. In the worlds where the US government gets Very Concerned, I suspect the economic/legal challenges would be figured out, and there would either be full nationalization or something that looks pretty indistinguishable from full nationalization for all practical purposes.
Nonetheless, I support this work overall and find myself very supportive of the following statement:
> Rather than committing to a specific model of the future, we believe the most effective analysis today will consider a wide range of scenarios that describe actions the US government will take in response to global circumstances. By enumerating many of the plausible scenarios regarding soft nationalization, we believe AI governance researchers can better ground our research in likely futures and design better interventions.
Thanks for the feedback, Akash!

Re: whether total nationalization will happen, I think one of our early takeaways here is that ownership of frontier AI is not the same as control of frontier AI, and that the US government is likely interested in only certain types of control.
That is, there are a number of plausible scenarios in which the US government has significant control over AI applications that involve national security (cybersecurity, weapons development, denial of technology to China), but little-to-no “ownership” of frontier AI labs and relatively less control over commercial/civilian applications of superintelligent AI — thereby achieving its goals with less involvement.
From this perspective, it’s less clear what the additional value-add of “ownership” would be for the US government, given the legal/political overhead. Certainly it’s still possible with significant motivation.
---
One additional thing I’ll note is that there’s a wide gap between the worldviews of policymakers (focused on current national security concerns, not prioritizing superintelligence scenarios) and AI safety/capabilities researchers (highly focused on superintelligence scenarios, and consequently on total nationalization).
Even if “total/hard nationalization” is the end-state, I think it’s quite possible that this gap will take time to close! Political systems and regulation tend to move a lot slower than technological advances. If there’s a 3–5 year period where policymakers are “ramping up” to the same level of concern/awareness as AI safety researchers, I expect some of these policy levers will be pulled during that “ramp-up” period.