You might be right, but let me make the case that AI won’t be slowed by the US government. Concentrated interests beat diffuse interests, so an innovation that promises to slightly raise economic growth but harms, say, lawyers could be politically defeated by lawyers, because they would care more about the innovation than anyone else. But, ignoring the possibility of unaligned AI, AI promises a significant net economic benefit to nearly everyone, even those whose jobs it threatens; consequently, there will not be coalitions to stop it, unless the dangers of unaligned AI become politically salient. The US, furthermore, will rightfully fear that if it slows the development of AI, it gives the lead to China, and this could be militarily, economically, and culturally devastating to US dominance. Finally, big tech has enormous political power through its campaign donations and control of social media, so politicians are unlikely to go against the will of big tech on something big tech cares a lot about.
Your last point seems to agree with point 7e becoming reality, where the US govt essentially allows existing big tech companies to pursue AI within certain ‘acceptable’ confines it thinks of at the time. In that case, how much AI might be slowed depends entirely on how tight a leash they are kept on. I think that scenario is actually quite likely, given that I am sure there is considerable overlap between US alphabet agencies and sectors of big tech.
I agree that competition with China is a plausible reason regulation won’t happen; that will certainly be one of the arguments advanced by industry and NatSec as to why AI should not be throttled. However, I’m not sure it will be stronger than the protectionist impulses, and I currently don’t think it will be. Possibly it will exacerbate the “centralization” of AI dynamic that I listed in the ‘licensing’ bullet point, where large existing players receive money and a de-facto license to operate in certain areas and then avoid others (as memeticimagery points out). So, for instance, we might see more military-style research, while GooAmBookSoft tacitly agree not to deploy AI that would replace lawyers.
To your point on big tech’s political influence: they have, in some absolute sense, a lot of political power, but relative to peer industries their influence is much weaker. I think they’ve benefited a lot from the R-D stalemate in DC; I’m positing that this pressure will go around or through that stalemate, and I don’t think they currently have the soft power to stop it.
Greatly slowing AI in the US would require new federal laws, meaning you need the support of the Senate, House, presidency, courts (to not rule them unconstitutional), and bureaucracy (to actually enforce them). If big tech can get at least one of these five power centers on its side, it can block meaningful change.
This seems like an important crux to me, because I don’t think greatly slowing AI in the US would require new federal laws. I think many of the actions I listed could be taken by government agencies that over-interpret their existing mandates, given the right political and social climate. For instance, the eviction moratorium during COVID obviously should have required congressional action, but was instead done by fiat through an over-interpretation of authority by an executive-branch agency.
What agencies do or do not do seems mostly dictated by that socio-political climate, and by the courts, which means fewer veto points for industry.