Thanks for the feedback, Akash!
Re: whether total nationalization will happen, I think one of our early takeaways here is that ownership of frontier AI is not the same as control of frontier AI, and also that the US government is likely interested in only certain types of control.
That is, there seem to be a number of plausible scenarios in which the US government has significant control over AI applications that involve national security (cybersecurity, weapons development, denial of technology to China), but little-to-no “ownership” of frontier AI labs and relatively less control over commercial / civilian applications of superintelligent AI. In this way, it achieves its goals with less involvement.
From this perspective, it’s a bit more gray what the additional value-add of “ownership” would be for the US government, given the legal / political overhead. It’s certainly still possible with significant motivation.
---
One additional thing I think is that there’s a wide gap between the worldviews of policymakers (focused on current national security concerns, not prioritizing superintelligence scenarios) and AI safety / capabilities researchers (highly focused on superintelligence scenarios, and consequently on total nationalization).
Even if “total / hard nationalization” is the end-state, I think it’s quite possible that this gap will take time to close! Political systems & regulation tend to move a lot slower than technological advances. If there’s a 3–5 year period where policymakers are “ramping up” to the same level of concern / awareness as AI safety researchers, I expect some of these policy levers will be pulled during that “ramp-up” period.