I agree that this is an important concept, or set of related concepts, that covers many of the more directly physical abstractions. If something isn’t fundamental at the level of quantum field theory, but can be measured with physics equipment, there is a good chance it is one of these sorts of abstractions.
Of course, a lot of the work in deciding what makes a sensible abstraction is done by the amount of blurring and by the often-implicit context.
For instance, take the abstraction “poisonous”. If the particular substance being described as poisonous is sitting in a box not doing anything, then we are talking about a counterfactual where a person eats the poison. Within that world, you are choosing a frame sufficiently zoomed in to tell if the hypothetical person was alive or dead, but not precise enough to tell which organs failed.
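That choice of frame can be sketched in code. The outcomes and field names below are entirely hypothetical, just to illustrate the blurring: the chosen abstraction keeps alive-vs-dead and discards which organ failed.

```python
# Hypothetical fine-grained outcomes of the "person eats the poison"
# counterfactual. Each outcome carries detail the chosen frame discards.
fine_grained_outcomes = [
    {"alive": False, "failed_organ": "liver"},
    {"alive": False, "failed_organ": "kidneys"},
    {"alive": True,  "failed_organ": None},
]

def blur(outcome):
    """The chosen zoom level: keep alive-vs-dead, discard which organ failed."""
    return "alive" if outcome["alive"] else "dead"

# Distinct fine-grained states collapse to the same abstract label.
print([blur(o) for o in fine_grained_outcomes])  # ['dead', 'dead', 'alive']
```

Two different organ-failure states map to the same label “dead”: that collapse is exactly what the amount of blurring determines.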
I think that different abstractions of objects are more useful in different circumstances. Consider a hard drive. In a context that involves moving large amounts of data, the main abstraction might be storage space. If you need to fit it in a bag, you might care more about size. If you need to dispose of it, you might care more about chemical composition and recyclability.
Consider some paper with ink on it. The induced abstractions framework can easily say that it weighs 72 grams, and has slightly more ink in the top right corner.
It has a harder time with descriptions like “surreal”, “incoherent”, “technical”, “humorous”, “unpredictable”, “accurate”, etc.
Suppose the document is talking about some ancient historical event for which rather limited evidence remains. The accuracy or inaccuracy of the document might be utterly lost in the mists of time, yet we still easily use “accurate” as an abstraction. That is, even a highly competent historian may be unable to cause any predictable physical difference in the future that depends on the accuracy of the document in question. Whereas the number of letters in the document is easy to ascertain, and can influence the future if the historian wants it to.
As this stands, it is conceptually useful, but does not cover anything like all human abstractions.
Yeah, so, chaos in physical systems definitely does not get us all human abstractions. I do claim that the more general framework (i.e. summary of information relevant “far away”, for various notions of “far away”) does get us all human abstractions.
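A toy numerical sketch of that “summary of information relevant far away” idea (the setup and all variable names here are mine, purely for illustration, not part of the framework itself):

```python
import random

random.seed(0)

# A "low-level" system of many micro-variables.
N = 1000
micro = [random.gauss(0, 1) for _ in range(N)]

# A "far away" variable that, by construction, depends on the
# micro-state only through its aggregate (plus a little noise).
far_away = sum(micro) / N + random.gauss(0, 0.01)

# The abstraction: a one-number summary of the micro-state. Everything
# else about micro (which variable held which value) is discarded.
abstraction = sum(micro) / N

# The summary predicts the far-away variable almost perfectly, so the
# discarded detail was irrelevant "far away" -- that is what makes the
# mean a good abstraction of this system.
error = abs(far_away - abstraction)
print(error < 0.1)
```

The point of the toy model is only that a good abstraction throws away almost all the low-level information while keeping everything relevant to the distant variable.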
Once we get into more meta concepts like “accurate”, some additional (orthogonal) conceptually-tricky pieces become involved. For instance, “probability is in the mind” becomes highly relevant, models-of-models and models-of-map-territory-correspondence become relevant, models of other humans become relevant, the fact that things-in-my-models need not be things-in-the-world at all becomes relevant, etc. I do still think the information-relevant-far-away-framework applies for things like “accuracy”, but the underlying probabilistic model looks less like a physical simulation and more like something from Hofstadter.
“Unpredictable” is a good example of this. If I say “this tree is 8 meters tall”, then all the abstractions involved are really in my-map-of-the-tree. But if I say “this tree’s height next year is unpredictable”, then the abstractions involved are really in my-map-of-(the correspondence between my-map-of-the-tree and the tree). And the general concept of “unpredictability” is in my-map-of-(how correspondence between my map and the territory works in general). And if I mean that something is unpredictable by other people, or other things-in-the-world, then that drags in even more maps-of-maps.
But, once all the embedded maps-of-maps are sorted out, I still expect the concept of “unpredictable” to summarize all the information about some-class-of-maps-embedded-in-my-world-model which is relevant to the-things-which-those-maps-correspond-to-in-my-world-model.