FWIW I still stand behind the arguments that I made in that old thread with Paul. I do think the game-theoretical considerations for AI maybe allowing some humans to survive are stronger, but they also feel loopy and like they depend on how good of a job we do on alignment, so I usually like to bracket them in conversations like this (though I agree it’s relevant for the prediction of whether AI will kill literally everyone).
[minor]
Worth noting that they might only depend on alignment to the extent mediated by the correlation between our success and aliens' success.
Highly competent aliens that care a lot about not killing existing beings seem pretty plausible to me.