I suspect that if an AI has some goal that requires destroying humanity and then manufacturing things in the aftermath, and it is intelligent and capable enough to actually do this, it will plan for that in advance and set up whatever initial automation it needs before destroying humanity. An AI with enough planning capability to, e.g., design a bioweapon or incite a nuclear war would probably also be able to think ahead about what to do afterwards, would have its own contingencies in place, and would not need to rely on whatever tools humanity happens to leave lying around once it is gone.