It’s arguable from a negative-utilitarian point of view, sure, but I find the argument wholly unconvincing.
How we get to our deaths matters, whether we have the ability to live our lives in a way we find fulfilling matters, and the continuation of our species matters. All are threatened by AGI.
I actually agree entirely. I just don’t think we need to explore those x-risks by exposing ourselves to them. We’ve already advanced AI far enough to start understanding and thinking through those x-risks, and an indefinite (though perhaps not permanent) pause in development would let us get our bearings.
Say what you need to say now to get away from the potential lion. Then, back at the campfire, talk it through.