I agree with almost all of this analysis, but I’m surprised at any suggestion that government shouldn’t be encouraged to pay more attention to AI.
The common tendency in the tech sphere to downplay government involvement seems maladaptive when applied to AGI. It was a useful instinct when resisting regulation that could stifle harmless innovation; it is an unhelpful one when applied to the dangerous development now taking place.
AGI seems like a scenario that governmental bodies are better calibrated to handle than corporate ones, as governments are at least partially incentivised to account for the public good. Meanwhile, the entire history of the corporation is one of disregard for negative externalities. From the East India Company’s indifference to millions of deaths in Bengal to modern tobacco and fossil fuel companies, corporations have repeatedly failed to safeguard against even the most catastrophic consequences. This is less due to some moral failing than to the fact that they simply lack the programming to do so. Good actors may start to develop some of this programming in their internal governance structures, but the record overwhelmingly suggests that this will neither be wholly reliable nor see adoption en masse.
Currently, AI development looks likely to continue in a low-oversight environment where companies face little external pressure to weigh up the existential risks they are hurtling us towards. Here, as in other sectors, government has a part to play in fostering an environment where AI risk is highly salient to the groups developing these systems. There is a general consensus that we are stepping down a dark path, and a healthy fear of the consequences of each step is beneficial across every institution that can influence our future here.
There’s truth to what you’re saying, but I think the downsides of premature government involvement are big too. I discuss this more in a follow-up post.
My view on this is that politics/government work is both very important and wildly intractable, for fundamental as well as practical reasons.
It’s sad to say that a very important cause should ultimately be left on the table, but I think this is the case.