Yeah, so, chaos in physical systems definitely does not get us all human abstractions. I do claim that the more general framework (i.e. summary of information relevant “far away”, for various notions of “far away”) does get us all human abstractions.
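(For concreteness, a minimal sketch of that framework, with notation invented for this comment rather than the full machinery: pick some variables $X$ and some notion of "far away", and let $Y$ be everything far away from $X$. An abstraction of $X$ is then a summary $A = f(X)$ such that $$P[Y \mid X] \approx P[Y \mid A],$$ i.e. $A$ keeps approximately all the information in $X$ which is relevant far away, and throws out the rest. The claim is that human concepts are summaries of this sort, for suitable underlying models and suitable notions of "far away".)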
Once we get into more meta concepts like “accurate”, some additional (orthogonal) conceptually-tricky pieces become involved. For instance, “probability is in the mind” becomes highly relevant, models-of-models and models-of-map-territory-correspondence become relevant, models of other humans become relevant, the fact that things-in-my-models need not be things-in-the-world at all becomes relevant, etc. I do still think the information-relevant-far-away-framework applies for things like “accuracy”, but the underlying probabilistic model looks less like a physical simulation and more like something from Hofstadter.
“Unpredictable” is a good example of this. If I say “this tree is 8 meters tall”, then all the abstractions involved are really in my-map-of-the-tree. But if I say “this tree’s height next year is unpredictable”, then the abstractions involved are really in my-map-of-(the correspondence between my-map-of-the-tree and the tree). And the general concept of “unpredictability” is in my-map-of-(how correspondence between my map and the territory works in general). And if I mean that something is unpredictable by other people, or other things-in-the-world, then that drags in even more maps-of-maps.
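(To make the levels explicit, again with notation invented for this comment: "this tree is 8 meters tall" is a claim about a height-variable $h$ in my world model, whereas "this tree's height next year is unpredictable" is roughly the claim that the entropy of my predictive distribution $P[h_{\text{next year}} \mid \text{my current map}]$ is large. The latter is a fact about my map, not about the tree, which is where "probability is in the mind" comes in.)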
But, once all the embedded maps-of-maps are sorted out, I still expect the concept of “unpredictable” to summarize all the information about some-class-of-maps-embedded-in-my-world-model which is relevant to the-things-which-those-maps-correspond-to-in-my-world-model.
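(In the sketch-notation from above, and again hedged rather than worked out in full: the "low-level" variables $X$ are some class of maps $M$ embedded in my world model, the "far away" variables $Y$ are the territory-variables $T$ those maps are about (also represented in my world model), and "unpredictable" is a summary $A = f(M)$ retaining the information in the maps relevant to $T$, e.g. that the maps' posteriors over $T$ are high-entropy. Everything here, maps and territory-variables alike, lives inside my world model; that's the Hofstadter-ish part.)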