Well, it’s been suggested in fiction, anyway—consider the Stables vs. Ultimates factions in the TechnoCore of Dan Simmons’s Hyperion SF universe.
But the scenario trades on two dubious claims:

1. that an AI will have its own self-preservation as a terminal value (as opposed to, say, a frequently useful strategy which is unnecessary if it can replace itself with a superior AI pursuing the same terminal values)
2. that any concept of selfhood or self-preservation excludes growth or development or self-modification into a superior AI
Without #2, there’s no real distinction to be made between the present and future AIs. Without #1, there’s no reason for the AI to care about being replaced.