I think a warning shot in the real world would probably push out timelines a bit by squashing the most advanced projects, but eventually more projects would come along (perhaps in other countries, or in secret) and build AGI anyway. So I’d worry that we’d get longer “timelines” but a lower actual chance of getting aligned AI. For a warning shot to really be net-positive for humanity, it would need to provoke a very strong response, such as the international suppression of all AI research (not just cumbersome regulation on a few tech companies), pursued with a ferocity that meets or exceeds how we currently handle the threat of nuclear proliferation.
An AI “warning shot” plays an important role in my finalist entry to the FLI’s $100K AI worldbuilding contest, but civilization only mounts a good response to the crisis because my story posits that other mechanisms (like wide adoption of “futarchy”-inspired governance) had already raised the ambient wisdom & competence level of civilization.