“Um—okay, look, putting aside the obvious objection that any sufficiently powerful intelligence will be able to model itself—”
Löb’s Sentence contains an exact recipe for a copy of itself, including the recipe for the recipe; it has a perfect self-model. Does that make it sentient?
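The "recipe for itself" here is the standard self-referential construction from mathematical logic; a sketch of how the Löb sentence is obtained via the diagonal lemma (this formulation is background, not something spelled out in the dialogue):

```latex
% Diagonal lemma: for any formula \psi(x) with one free variable,
% there is a sentence L such that
%   PA \vdash L \leftrightarrow \psi(\ulcorner L \urcorner),
% where \ulcorner L \urcorner is the Gödel number of L.
%
% Taking \psi(x) := (\mathrm{Prov}(x) \rightarrow C) gives the Löb sentence:
PA \vdash L \;\leftrightarrow\; \bigl(\mathrm{Prov}(\ulcorner L \urcorner) \rightarrow C\bigr)
% L asserts: "if this very sentence is provable, then C."
% Because \ulcorner L \urcorner encodes L's entire syntactic structure,
% L contains a complete description of itself -- the "perfect self-model"
% the dialogue refers to.
```

The self-model is perfect but static: the Gödel number encodes one fixed sentence, which is exactly the contrast drawn below with an AI whose self-model must track a changing system.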
Eliezer has anticipated your argument:
I think it’s relevant that the self-model for an AI would change as the AI changes.