IMO, I expect several of the prerequisites for this scenario to arrive well past the time of perils, by which point either we have gone extinct or we have muddled through and successfully aligned ASI. In particular, I expect formal verification of neural networks to be extremely hard; if we could do it in a scalable way, it would let us establish much deeper interpretability and safety properties, which would make AI safety a whole lot easier.
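To make concrete what "formal verification of neural networks" involves and why scaling it is hard, here is a minimal sketch of interval bound propagation (IBP), one standard sound-but-incomplete verification technique (not something the comment above commits to); the network, weights, and the property being checked are all made up for illustration. Exact verification of ReLU networks is NP-complete, and cheap relaxations like IBP produce bounds that get rapidly looser with depth, which is one reason scalable verification of large models looks so difficult.

```python
import numpy as np

def ibp_layer(lo, hi, W, b):
    """Propagate an input box [lo, hi] through an affine layer + ReLU.

    Splitting W into positive and negative parts gives sound (but loose)
    elementwise bounds on the layer output.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

# Hypothetical tiny 2-layer network with random weights (illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

# Property to verify: for every input within +/-0.1 of x0,
# logit 0 stays strictly above logit 1.
x0 = rng.normal(size=4)
lo, hi = x0 - 0.1, x0 + 0.1
lo, hi = ibp_layer(lo, hi, W1, b1)

# Final affine layer (no ReLU on the logits).
W_pos, W_neg = np.maximum(W2, 0), np.minimum(W2, 0)
out_lo = W_pos @ lo + W_neg @ hi + b2
out_hi = W_pos @ hi + W_neg @ lo + b2

# Sound check: certified only if the worst-case bounds still satisfy it.
# A "False" here means "could not certify", not "property is violated" --
# exactly the incompleteness that makes scaling this up so hard.
print("certified:", out_lo[0] > out_hi[1])
```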