AI Governance Researcher at Convergence Analysis.
Deric Cheng
Aligning AI Safety Projects with a Republican Administration
AI Model Registries: A Foundational Tool for AI Governance
Thanks for the feedback, Akash!
Re: whether total nationalization will happen, I think one of our early takeaways here is that ownership of frontier AI is not the same as control of frontier AI, and also that the US government is likely interested in only certain types of control.
That is, it seems like there are a number of plausible scenarios where the US government has significant control over AI applications that involve national security (cybersecurity, weapons development, denial of technology to China), but also little-to-no “ownership” of frontier AI labs, and relatively less control over commercial / civilian applications of superintelligent AI, thereby achieving its goals with less involvement.
From this perspective, it’s less clear what additional value “ownership” would provide for the US government, given the legal / political overhead. It’s certainly still possible with significant motivation.
---
One additional thing I’d note is that there’s a wide gap between the worldviews of policymakers (focused on current national security concerns, not prioritizing superintelligence scenarios) and AI safety / capabilities researchers (highly focused on superintelligence scenarios, and consequently on total nationalization).
Even if “total / hard nationalization” is the end-state, I think it’s quite possible that this gap will take time to close! Political systems & regulation tend to move a lot slower than technological advances. If there’s a 3–5 year period where policymakers are “ramping up” to the same level of concern / awareness as AI safety researchers, I expect some of these policy levers will be used during that “ramp-up” period.
That’s a very good point! Technically he’s retired, but I wonder how much his appointment is related to preparing for potential futures where OpenAI needs to coordinate with the US government on cybersecurity issues...
Soft Nationalization: how the USG will control AI labs
2024 State of the AI Regulatory Landscape
Totally agree on UBI being equivalent to a negative income tax in many ways! My main argument here is that UBI is an unrealistic policy when it comes to actually implementing it, whereas an NIT produces the same general outcome and is significantly more realistic. If you use the phrase UBI as the “high-level vision” and actually mean “implement it as an NIT” in terms of policy, I can get behind that.
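(For anyone who hasn’t seen the equivalence written out, here’s the standard way of stating it, in my own notation rather than anything from the post: a UBI of amount B financed by a flat income tax at rate t gives someone with pre-tax income y the same net transfer as an NIT with guarantee G = B and phase-out rate r = t.)

```latex
% Net transfer schedules (illustrative notation, not from the post):
%   UBI of amount B, financed by a flat income tax at rate t
%   NIT with income guarantee G, phased out at rate r
\[
  T_{\mathrm{UBI}}(y) = B - t\,y, \qquad T_{\mathrm{NIT}}(y) = G - r\,y .
\]
% With G = B and r = t the two schedules coincide at every income level y
% (treating NIT transfers that go negative above the break-even income G/r
% as ordinary taxes), so the difference is administrative, not economic.
\[
  T_{\mathrm{UBI}}(y) = T_{\mathrm{NIT}}(y) \quad \text{whenever } G = B \text{ and } r = t .
\]
```

The practical difference is that an NIT only cuts checks below the break-even income G/r, which is what keeps its gross budget line so much smaller.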
Re: the simplicity idea, repeating what I left in a comment above:
Personally, I really don’t get the “easy to maintain” argument for UBI, esp. given my analysis above. You’d rather have a program that costs $4 trillion with zero maintenance costs than a similarly impactful program that costs ~$650 billion with maintenance costs? It’s kind of a reductive argument that only makes sense when you don’t look at the actual numbers behind implementing a policy idea.
Re: “UBI in the context of automation”, that’s a great point and I can definitely see what you’re getting at! The answer is that this is Part 1 of a 2-part series: Part 1 is how to implement UBI realistically, and Part 2 is how to pay for it. Paying for it is an equally or even more interesting problem.
Re: penalizing productivity, it’s pretty unclear from the research whether NIT actually reduces employment (the main side effect of penalizing productivity). Of course theoretically it should, but the data isn’t really conclusive in either direction. Bunch of links above.
A modified EITC wouldn’t create pressure to dismantle the current welfare system, because it’s a LOT cheaper than 40% of the US budget. Adding a pure UBI on top of the existing welfare systems would push redistribution to something like 70–80% of the US budget, which is a pretty dicey political stance.
Personally, I really don’t get the “easy to maintain” argument for UBI, esp. given my analysis above. You’d rather have a program that costs $4 trillion with zero maintenance costs than a similarly impactful program that costs ~$650 billion with maintenance costs? It’s kind of a reductive argument that only makes sense when you don’t look at the actual numbers behind implementing a policy idea.
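To make the gap concrete, here’s a toy back-of-envelope sketch. Every parameter (benefit size, phase-out rate, the stylized income distribution) is an illustrative assumption for this example, not the basis of the ~$650 billion figure above; the point is just that phasing a benefit out with income shrinks the gross cost by a large multiple.

```python
# Toy comparison of a universal payment vs. a phased-out NIT.
# All parameters below are illustrative assumptions, not the inputs
# behind the estimates in my posts.

GUARANTEE = 12_000       # assumed benefit per adult per year, in dollars
PHASE_OUT_RATE = 0.50    # assumed benefit reduction per dollar of earnings
ADULTS = 260e6           # rough US adult population

def ubi_gross_cost() -> float:
    """Universal payment: every adult receives the full guarantee."""
    return GUARANTEE * ADULTS

def nit_payment(income: float) -> float:
    """NIT: the benefit shrinks with earnings and hits zero at the break-even income."""
    return max(0.0, GUARANTEE - PHASE_OUT_RATE * income)

# Stylized income distribution: (share of adults, representative income).
# Invented purely for illustration.
INCOME_DISTRIBUTION = [
    (0.15, 0),
    (0.20, 15_000),
    (0.25, 40_000),
    (0.40, 90_000),
]

def nit_gross_cost() -> float:
    per_adult = sum(share * nit_payment(income) for share, income in INCOME_DISTRIBUTION)
    return per_adult * ADULTS

print(f"Universal payment gross cost: ${ubi_gross_cost() / 1e12:.1f} trillion")
print(f"Phased-out NIT gross cost:    ${nit_gross_cost() / 1e12:.2f} trillion")
```

Plugging in real population and income data changes the exact numbers, but not the order-of-magnitude gap between the two designs.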
I’d definitely agree with the perspective you’re sharing!
Even in a fast-takeoff / total nationalization scenario, I don’t think politicians will be so blindsided that the political discussion will go from “regulation as usual” to “total nationalization” in a couple of months. It’s possible, but unlikely.
I think it’s equally or more likely that the amount of government involvement will scale up over 2–5 years, and during that time a lot of these policy levers will be attempted. The success / failure of some of these policy levers at achieving US goals will probably determine whether involvement proceeds all the way to “total nationalization”.