You talk about “we” surviving (or not surviving) AGI. I suspect much of the expected death from early AGI falls in scenarios where there are survivors, in the window after the first AGI but before a superintelligence has enough hardware and free rein. If prosaic alignment proves effective enough to enable dangerous non-superintelligent systems that refrain from developing stronger systems, and there are early scares, active development toward superintelligence might slow down for many years. In the interim, we get only the gradually escalating danger from existing systems (mostly biorisk, but also economic and global-security upheaval) without a resolution either way.