It captures a lot of unimportant information, which makes the models more unwieldy. Really, information is a cost: the point of a map is not to faithfully reflect the territory, because that would make the map prohibitively expensive to read. Rather, the point of a map is to give the simplest way of thinking about the most important features of the territory. For instance, literal maps often use flat colors (low information!) to represent different kinds of terrain (important features!).
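The flat-colors example can be sketched in code. Here is a toy quantizer (the names and thresholds are mine, purely illustrative): the "territory" is precise elevation data, and the "map" discards nearly all of it, keeping only a coarse terrain category per cell.

```python
# Toy illustration of the map/territory trade-off: the "territory" is
# high-resolution elevation data; the "map" throws most of it away,
# keeping only a flat terrain category per point. Thresholds and names
# are arbitrary choices for the sketch.

def make_map(territory, thresholds=(0.0, 200.0, 1000.0)):
    """Compress fine-grained elevations (meters) into flat categories."""
    def categorize(elevation):
        if elevation < thresholds[0]:
            return "water"
        if elevation < thresholds[1]:
            return "plains"
        if elevation < thresholds[2]:
            return "hills"
        return "mountains"
    return [categorize(e) for e in territory]

territory = [-3.2, 50.0, 500.0, 2450.0]  # precise, expensive to store and read
map_view = make_map(territory)           # lossy, cheap to reason about
print(map_view)  # ['water', 'plains', 'hills', 'mountains']
```

The map here is strictly less informative than the territory, and that is exactly what makes it useful: four labels are easier to act on than four floating-point measurements.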
Yeah, this is probably one of the biggest differences between idealized notions of computation/intelligence, like AIXI (at the weak end) or the Universal Hypercomputer model from the paper The Universal Hypercomputer (at the strong end), and real agents: computation costs.
Idealized agents can often treat their maps as equivalent to a given territory, at least under full simulation/computation. Real agents, by contrast, must accept discrepancies between the map and the territory they're trying to model, which is why the saying "the map is not the territory" holds for us.