This is a misreading of traditional utility theory and of ontology.
When you change your ontology, concepts like “cat” or “vase” don’t become meaningless, they just get translated.
Also, you know that AIXI’s reward function is defined on its percepts and not on world states, right? It seems a bit tautological to say that its utility is local, then.
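For concreteness, here is a rough sketch of AIXI's standard action-selection rule, in Hutter's usual notation (the notation is the standard one, not something introduced in this discussion):

\[
% Standard AIXI expectimax; percepts x_t = o_t r_t carry the reward directly.
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_k + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

Here each percept \(x_t = o_t r_t\) bundles an observation with a reward, \(U\) is a universal monotone Turing machine, \(\ell(q)\) is the length of program \(q\), and \(m\) is the horizon. The reward \(r_t\) is read directly off the percept string, not computed from any representation of world state.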
This seems like a misreading of my post.
That’s a big part of my point.
Wait, who’s talking about AIXI?
AIXI is relevant because it shows that defining reward over world states is not the dominant approach in AI research.
But world state is still well-defined even with ontological changes because there is no ontological change without a translation.
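To sketch the translation claim (my formalization, not anything spelled out elsewhere in the thread): write the translation as a map between the state spaces of the two ontologies and pull old concepts and utilities back along it,

\[
% tau maps new-ontology states to old-ontology states; concepts and utility pull back by composition.
\tau : S_{\text{new}} \to S_{\text{old}}, \qquad
C_{\text{new}} \;:=\; C_{\text{old}} \circ \tau, \qquad
u_{\text{new}} \;:=\; u_{\text{old}} \circ \tau,
\]

where \(C_{\text{old}} : S_{\text{old}} \to \{0,1\}\) is a concept such as “cat” or “vase” and \(u_{\text{old}} : S_{\text{old}} \to \mathbb{R}\) is the utility function. Under this reading, anything defined on the old states remains well-defined; it is just re-expressed on the new ones.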
Perhaps I would say that “impact” isn’t very important, then, unless you define it as a utility delta.
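One hedged way to cash out “utility delta” (the do-nothing baseline below is an assumption of this sketch, not something specified above):

\[
% Impact of action a in state s, measured against a no-op baseline (an assumed choice).
\mathrm{Impact}(s, a) \;:=\; \bigl|\, \mathbb{E}[\,u \mid s, a\,] \;-\; \mathbb{E}[\,u \mid s, \varnothing\,] \,\bigr|
\]

where \(\varnothing\) is a do-nothing baseline; other baselines, such as the state before acting, give different notions of impact.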