This seems right (though I have some apprehension around talking about “parts” of an AI). From the perspective of proving a theorem, it seems like you need some sort of assumption on what the rest of the AI looks like, so that you can say something like “the goal-directed part will outcompete the other parts”. Though perhaps you could try defining goal-directed behavior as the sort of behavior that tends to grow and outcompete things—this could be a useful definition? I’m not sure.