If your agent operates in an environment where its sense data contains errors, or where the world that spawns that sense data isn't deterministic (at least not at a level the sense data can pick up), then perfect predictability is out of the question anyway; neither condition can be avoided.
The problem then shifts to “how much error or fuzziness in the sense data or the underlying world is allowed”, at which point there's a trade-off between a short and enormously more preferred model that predicts more errors/fuzziness versus a longer and enormously less preferred model that predicts fewer errors/fuzziness.
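As a rough sketch of that trade-off (my own illustration, not anything taken from AIXI proper): treat model choice as a two-part code in the MDL/Solomonoff style, where every extra bit of model description costs a factor of 2 in prior probability, and the observed data is then encoded using the model's predictions. All the bit counts and error rates below are made-up numbers, purely for illustration.

```python
import math

def data_code_length(n_obs: int, n_errors: int, error_rate: float) -> float:
    """Bits to encode n_obs observations under a model that assigns
    probability (1 - error_rate) to each observation it predicts
    correctly and error_rate to each one it gets wrong."""
    correct = n_obs - n_errors
    return -(correct * math.log2(1.0 - error_rate)
             + n_errors * math.log2(error_rate))

def total_code_length(model_bits: float, n_obs: int,
                      n_errors: int, error_rate: float) -> float:
    """Two-part code: bits for describing the model itself plus bits for
    the data given the model (each model bit halves the prior weight)."""
    return model_bits + data_code_length(n_obs, n_errors, error_rate)

# Hypothetical numbers: 1000 observations, 50 of which the short model
# mispredicts because it writes them off as noise, while the long model
# explains every observation exactly.
n_obs, n_errors = 1000, 50

short_fuzzy = total_code_length(model_bits=200, n_obs=n_obs,
                                n_errors=n_errors, error_rate=0.05)
long_exact = total_code_length(model_bits=2000, n_obs=n_obs,
                               n_errors=0, error_rate=0.001)

print(f"short model that tolerates errors: {short_fuzzy:7.1f} bits")
print(f"long model that explains them all: {long_exact:7.1f} bits")
# With these numbers the shorter, fuzzier model wins; more data or a
# higher error rate eventually flips the comparison the other way.
```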
As far as I know, this trade-off is not an often-discussed topic, at least not around here, probably because nobody has yet hooked up a computable approximation of AIXI to sensors that are relevantly imperfect and that probe a genuinely probabilistic environment. Those concerns do not really apply to learning PAC-Man.