If the AI is a maximizer rather than a satisficer, then it will likely have a method for measuring the quality of its paths toward its optimization target, derived from its utility function and its model of the world. So the question isn't whether it will be able to choose a path; it's whether it is more likely to choose a path where it sits around risking its own destruction, or to get started protecting things that share its goal (including itself) and achieving some of its subgoals.

Also, if the AI is a satisficer, then maybe that would increase its odds of sitting around waiting for continents to drift, but maybe not.
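
To make the maximizer/satisficer contrast concrete, here is a minimal sketch; the `Plan` class, the scores, and the threshold are made up for illustration, not anyone's actual proposal. A maximizer ranks every candidate path by the score derived from its utility function and world model and takes the best one, while a satisficer can stop at the first path that clears some "good enough" bar, which is where the sit-around-and-wait outcome becomes at least conceivable.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    expected_utility: float  # score derived from the utility function + world model

# Hypothetical candidate paths with made-up scores.
plans = [
    Plan("sit around passively, risking destruction", 0.1),
    Plan("protect goal-sharing things (incl. itself) and pursue subgoals", 0.9),
    Plan("pursue subgoals with no self-protection", 0.6),
]

def maximizer_choice(plans):
    # A maximizer compares every path and takes the highest-scoring one.
    return max(plans, key=lambda p: p.expected_utility)

def satisficer_choice(plans, threshold=0.5):
    # A satisficer settles for the first path that clears its threshold,
    # so whether a passive path gets picked depends on ordering and threshold.
    for p in plans:
        if p.expected_utility >= threshold:
            return p
    return None

print(maximizer_choice(plans).name)   # the active, self-protecting path
print(satisficer_choice(plans).name)  # also the active path here, but only by luck of ordering
```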