I expect this to be a good but not perfect analogy for how an AI-related catastrophic event could trigger political change. My understanding is that a crucial part of the public discourse was, as other commenters allude to, a perceived taboo against being anti-war, such that even reputable center-left mainstream sources did not in fact doubt the evidence for Iraq's alleged WMDs. A crucial component was likely the moral dimension of the debate ("are you suggesting we should not do anything about 9/11?"), which prevented people from speaking out.
I expect an AI-related fiasco to carry less of this moral load. Scenarios like the Wenzhou train collision or the 2018 bridge collapse in Italy seem more analogous, in that the catastrophe is clearly an accident: one that, while perhaps caused by recklessness, was not caused by a clearly evil entity. The Wikipedia article on the bridge collapse makes it sound like a lot of blame was assigned in the aftermath, but it makes no mention of any effort to invest more in infrastructure.
Great study!
A strong motivating aspect of the study is measuring AI R&D acceleration. I am somewhat wary of using this methodology as negative evidence against this kind of acceleration happening at labs:
I continue to believe that using AI agents productively is a question of skill, despite the graphs in the paper showing no learning effects. One kind of company filled with people who know a lot about how to prompt AIs, and about their limitations, is an AI lab. Even if most developers lack this skill, lab employees are unusually likely to have it.
Mean speedup/slowdown can be a misleading metric: the heavy tail of research impact, combined with feedback loops around AI R&D, means that just one subgroup with a high positive speedup could have an outsized impact (see the toy sketch at the end of this comment).
Judging from a recent account of an employee who left OpenAI, the dev experience there also sounds pretty dissimilar. To summarize: OAI repos are large (which matches the study setting), but people don't have a great understanding of the full repo (since it's a large monorepo and many new people are joining), and there do not seem to be uniform code guidelines.
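To make the point about means concrete, here is a minimal sketch with entirely made-up numbers: 95 developers slowed down 20%, 5 developers sped up 3x, and the sped-up subgroup assumed to be working on heavy-tail (100x-impact) projects. The mean speedup reads as a slowdown, while impact-weighted output more than doubles.

```python
# Toy numbers, entirely made up, to show why the *mean* speedup can
# mislead when research impact is heavy-tailed.
n_slow, n_fast = 95, 5
slow_speedup, fast_speedup = 0.8, 3.0   # 95 devs slowed 20%, 5 devs sped up 3x
slow_impact, fast_impact = 1.0, 100.0   # assumption: the fast subgroup happens
                                        # to work on 100x-impact projects

mean_speedup = (n_slow * slow_speedup + n_fast * fast_speedup) / (n_slow + n_fast)
baseline = n_slow * slow_impact + n_fast * fast_impact
with_ai = n_slow * slow_speedup * slow_impact + n_fast * fast_speedup * fast_impact

print(f"mean speedup:           {mean_speedup:.2f}x")        # 0.91x -- looks like a slowdown
print(f"impact-weighted output: {with_ai / baseline:.2f}x")  # 2.65x -- net acceleration
```

Of course, the assumed correlation between speedup and impact is doing all the work here; the point is just that the mean alone cannot rule this scenario out.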