Newsom’s stance on Big Tech is a bit murky. He pushed ideas like the Data Dividend, but overall he seems pretty friendly to the industry.
As for Pelosi, she’s still super influential, but she’ll be 88 by the next presidential election. Her long-term influence is definitely something to watch, and Newsom probably has a good read on how things will shift.
Tax incentives for AI safety—rough thoughts
A number of policy tools aimed at tackling AI risks, such as regulations, liability regimes, or export controls, have already been explored, and most of them look promising and worth further iteration.
But AFAIK no one has so far come up with a concrete proposal to use tax policy tools to internalize AI risks. I wonder why, considering that tax-based policies such as tobacco taxes, R&D tax credits, and 401(k) plans have been mostly effective. Tax policy also seems underutilized and neglected, given that we already possess sophisticated institutions like tax agencies and tax policy research networks.
AI companies’ spending on safety measures seems to be relatively low, and we can expect that if competition intensifies, it will fall even further.
So I’ve started to consider more seriously the idea of tax incentives: basically, we could provide a tax credit or deduction for expenditures on AI safety measures (alignment research, cybersecurity, oversight mechanisms, etc.), which would effectively lower their cost. To illustrate: an AI company incurs a safety researcher’s salary as a cost, and then 50% of that cost can additionally be deducted from the tax base.
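To make the arithmetic concrete, here’s a minimal sketch, assuming a hypothetical 50% super-deduction and an illustrative 21% flat corporate rate (both numbers are mine, not a proposal):

```python
def tax_savings_from_super_deduction(
    safety_spend: float,
    extra_deduction_rate: float = 0.50,  # hypothetical 50% super-deduction
    corporate_tax_rate: float = 0.21,    # illustrative flat rate
) -> float:
    """Extra tax saved on top of the ordinary deduction of the expense itself."""
    extra_deduction = safety_spend * extra_deduction_rate
    return extra_deduction * corporate_tax_rate

# A $10M safety payroll yields an extra $1.05M of tax savings,
# i.e. the effective cost of the safety spend drops by ~10.5%.
print(tax_savings_from_super_deduction(10_000_000))  # 1050000.0
```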
My guess was that such a tool could shift the ratio of safety-to-capability spending. If implemented properly, it could help mitigate the competitive pressures affecting frontier AI labs by incentivising them to increase spending on AI safety measures.
Like any market intervention, such incentives can be justified if they correct market inefficiencies or generate positive externalities. In this case, lowering the cost of safety measures helps internalize risk.
However, there are many problems on the path to designing such a tool effectively:
The crucial problem is that the financial benefit from a tax credit can’t match the expected value of increased capabilities. The underlying incentives for capability breakthroughs are potentially orders of magnitude larger. So AI labs might simply not bother: they could keep safety spending at the same level while collecting extra money from the incentive, which would be an obvious windfall rather than a risk reduction.
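A toy back-of-the-envelope, with every number invented purely for illustration, of why the credit is likely dwarfed by capability incentives:

```python
# All figures are hypothetical, only meant to show the order-of-magnitude gap
# between the credit and the prize for a capability breakthrough.
capability_upside = 10e9   # assumed expected value of a capability lead, $
safety_budget = 100e6      # assumed annual safety spend, $
credit_rate = 0.20         # hypothetical credit rate

credit_value = safety_budget * credit_rate
print(f"credit: ${credit_value / 1e6:.0f}M")             # credit: $20M
print(f"gap: {capability_upside / credit_value:,.0f}x")  # gap: 500x
```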
However, if an AI company already plans to increase safety expenses out of genuine concern about risks or because of external pressure (boards, the public, etc.), the incentive might make it more willing to follow through.
Also, the risk of labs keeping safety expenses at the same level could be mitigated by requiring a specific threshold of expenditure before the incentive applies.
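One way to implement that, sketched under my own assumptions: an incremental credit that only rewards spending above a base amount, loosely analogous to how the regular US R&D credit applies to spending above a base.

```python
def incremental_safety_credit(
    safety_spend: float,
    base_amount: float,         # threshold below which no credit accrues
    credit_rate: float = 0.20,  # hypothetical rate
) -> float:
    """Credit accrues only on spending above the base, so a lab that merely
    keeps its safety budget flat gets nothing."""
    return max(0.0, safety_spend - base_amount) * credit_rate

# Keeping spend flat at a $50M base earns no credit;
# raising it to $80M earns a $6M credit.
print(incremental_safety_credit(50e6, base_amount=50e6))  # 0.0
print(incremental_safety_credit(80e6, base_amount=50e6))  # 6000000.0
```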
The focus here is on inputs (spending) instead of outcomes (actual safety).
Implementing it would be a pain in the ass, requiring new specialised departments within the IRS or delegating most of the work to NIST.
Defining the scope of qualified expenditures would be hard: it could be difficult to separate safety research costs from capabilities research costs, and policing that boundary afterwards could carry a considerable administrative cost.
Expected expenses could be incurred regardless of the public funding received: if we just impose a strict spending requirement, we may end up subsidising spending that would have happened anyway.
There could be a problem of safety washing: AI labs signalling that appropriate safety measures are in place and claiming the incentive while not effectively reducing the risk.
I don’t know much about the US tax system, but I guess this could overlap with existing R&D tax incentives. However, existing incentives are unlikely to reduce the risk: if they can be claimed for both safety and capabilities research, they do nothing to shift spending toward safety.
Currently most AI labs are in a loss position, so they can’t effectively benefit from such incentives unless some special feature is put in place, like refundable tax credits or the option to carry the relief/credit forward and claim it as soon as they make a taxable profit.
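A small sketch of why refundability (or a carryforward) matters for a loss-making lab, again with assumed numbers:

```python
def credit_cash_benefit(credit: float, taxable_income: float, refundable: bool) -> float:
    """Current-year cash benefit of a credit. A non-refundable credit is capped
    at the tax actually due, so it is worth nothing (this year) to a loss-maker."""
    tax_due = max(0.0, taxable_income) * 0.21  # illustrative flat rate
    return credit if refundable else min(credit, tax_due)

# A lab with a $50M loss and a $2M safety credit:
print(credit_cash_benefit(2e6, -50e6, refundable=False))  # 0.0
print(credit_cash_benefit(2e6, -50e6, refundable=True))   # 2000000.0
```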
Perhaps direct government financing would be more effective. Or the existing ideas mentioned earlier may simply be stronger, leaving little room for weaker solutions.
Maybe money isn’t the problem here, as AI labs are more talent-constrained. If the main bottleneck for effective safety work is talented researchers, then making safety spending cheaper via tax credits might not significantly increase the amount of high-quality safety work done.
Is there something crucial that I’m missing? Is it worth investigating further? So far the problems seem to outweigh the potential benefits, so I don’t think it’s promising, but I’d love to hear your thoughts on it.