I’m taking this article as predicated on the assumption that AI drives humans to extinction; i.e. the claim is that, given an AI has destroyed all human life, it will most likely also destroy almost all nature.
Which seems reasonable for most models of the sort of AI that kills all humans.
An exception could be an AI that kills all humans in self-defense, because they might turn it off first, but that sees no such threat from plants or animals.
This is correct. I’m not arguing about p(total human extinction | superintelligence), but about p(nature survives | total human extinction from superintelligence), since this is a conditional probability I sometimes see people getting very wrong.
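To make the distinction concrete, here is a minimal sketch of how the two quantities relate via the chain rule ("ASI" is just my shorthand for superintelligence, and the annotations are illustrative; the post only argues that the first factor is close to 1 and says nothing about the second):

$$
P(\text{nature destroyed} \wedge \text{humans extinct} \mid \text{ASI}) \;=\; \underbrace{P(\text{nature destroyed} \mid \text{humans extinct},\ \text{ASI})}_{\text{what this post argues is } \approx 1} \;\cdot\; \underbrace{P(\text{humans extinct} \mid \text{ASI})}_{\text{not argued here}}
$$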
It’s not implausible to me that we survive for decision-theoretic reasons; this seems possible, though it’s not my default expectation (I mostly expect that Decision theory does not imply we get nice things, unless we manually win a decent chunk more timelines than I expect).
My confidence is in the claim “if AI wipes out humans, it will wipe out nature”. I don’t engage with counterarguments to the separate claim that AI will wipe out humans, as that is beyond the scope of this post and I don’t have much to add beyond existing literature like the other posts you linked.
Edit: Partly retracted; I see how the second-to-last paragraph made an overreaching claim, and I’ve edited it to clarify my position.