I do see the inverse side: a single fixed goal would be something in the mind that’s not open to critique, hence not truly generally intelligent from a Deutschian perspective (I would guess; I don’t actually know his work well).
To expand on the “not truly generally intelligent” point: one way this could play out is if the goal included tacit assumptions about the universe that later turned out not to hold in general. For example, suppose the agent’s goal involved ever-longer-range simultaneous coordination, and the goal was fixed before the discovery of relativity (which makes simultaneity frame-dependent). If the goal were truly unchangeable, it would block, or at least complicate, the agent’s updating to a new, truer ontology.