I think your question is “why are abstractions useful?” Thinking more about it, let’s start with what is meant by an abstraction. From Wikipedia:
Abstraction in its main sense is a conceptual process where general rules and concepts are derived from the usage and classification of specific examples, literal (“real” or “concrete”) signifiers, first principles, or other methods.
“An abstraction” is the outcome of this process—a concept that acts as a common noun for all subordinate concepts, and connects any related concepts as a group, field, or category.
One can think of an abstraction as a category in itself. You start with one domain of the territory and create a surjective (but not injective: maps are lossy) morphism that preserves the rules (arrows) governing the domain in the codomain. Then you find another such category, with a different domain but the same codomain, and notice that there is a functor that preserves these morphisms. Maybe you can find more categories like that. You end up with a natural transformation between these functors, i.e. a morphism in a functor category. And that is your abstraction. You end up relating, say, electrostatics, fluid dynamics, and Newtonian gravity through the shared codomain of the Poisson equation.
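For concreteness, here is a sketch of that shared codomain (the fluid case shown is the source-free special case, ideal incompressible irrotational flow; the symbols are the standard textbook ones, not from the original comment):

```latex
\nabla^2 \varphi = -\rho_q / \varepsilon_0 \quad \text{(electrostatics: charge density sources the potential)}
\nabla^2 \varphi = 4\pi G \rho_m \quad \text{(Newtonian gravity: mass density sources the potential)}
\nabla^2 \varphi = 0 \quad \text{(ideal fluid flow: the velocity potential obeys the source-free case)}
```

Three physically different domains, one equation shape; solve it once and you have solved it everywhere.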
So, if one defines an abstraction as a natural transformation between categories of maps of the territory, the question becomes, why are these natural transformations useful?
To look deeper into this, I would note that, as often discussed here, every agent is an embedded agent, meaning the algorithm it runs is based on maps that are themselves a (tiny) part of the territory. For these maps to be useful, the territory must be mappable to begin with, i.e. the parts of the territory that are important for the agent’s survival must be predictable from the maps. There is certainly pressure to minimize the resources spent on maps while maximizing the domains where those maps are useful, and one way to do that is to raise the “abstraction level”; natural transformations are pretty abstract. The broader the swaths of the territory one wants to map, the more pressure there is to create higher abstraction levels. With enough optimization pressure I can imagine even higher levels of abstraction emerging, such as lax natural transformations.
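As a toy illustration of that compression (my own sketch, not from the original discussion; the setup and numbers are made up): a single generic Poisson solver, which knows nothing about physics, serves as the map for two different parts of the territory, and the per-domain cost is just a source term.

```python
import numpy as np

def solve_poisson_1d(source, dx):
    """Generic 1D Poisson solver: phi'' = source, with phi = 0 at both ends.

    The solver is the shared abstraction; it knows nothing about physics.
    """
    n = len(source)
    # Second-difference approximation of the 1D Laplacian (Dirichlet boundaries).
    laplacian = (np.diag(-2.0 * np.ones(n))
                 + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1)) / dx**2
    return np.linalg.solve(laplacian, source)

dx = 0.01
x = np.arange(dx, 1.0, dx)  # interior points of the unit interval

# Two different territories, one codomain:
rho_q = np.exp(-((x - 0.5) / 0.1) ** 2)            # a blob of charge
rho_m = np.where(np.abs(x - 0.5) < 0.2, 1.0, 0.0)  # a slab of mass

EPS0, G = 8.854e-12, 6.674e-11
phi_electric = solve_poisson_1d(-rho_q / EPS0, dx)         # electrostatics
phi_gravity = solve_poisson_1d(4 * np.pi * G * rho_m, dx)  # Newtonian gravity
```

The “compression” is that adding a third domain (say, steady-state heat conduction) costs only one more source term, not a new solver.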
Abstractions are not unique to humans, or to consciousness. But when they bubble up to the conscious mind we call them mathematics, even though our subconscious minds are already expert at solving non-linear PDEs super quickly, say, when throwing a ball toward a target.
So, to answer a related question, the “unreasonable effectiveness” of mathematics is an artifact of optimization pressures on an embedded agent in a (partially) predictable universe.
I like this mode of thinking, and its angle is something I haven’t considered before. How would you interpret/dissolve the kind of question I posed in the answer to Pattern in the comments below?
Namely:
‘My point is the process of maths is (to a degree) invented or discovered, and under the invented hypothesis, where one would adhere to strict physicalist-style nominalism, the very act of predicting that the solutions to very real problems are dependent on abstract insight is literally incompatible with that position, to the point where seeing it done, even once, forces you to make some drastic ramifications to your own ontological model of the world.’
In addition, I am versed in rudimentary category theory, and wouldn’t defining abstraction as a natural transformation presuppose the existence of abstraction in order to define it? This may be naive of me, or perhaps I have not grasped your subtlety, but using an abstract notion like a natural transformation to define abstraction seems circular to me.
Sorry, my spam filter ate your reply notification :(
To “dissolve” the invented/discovered question about math: it’s a false dichotomy. Constructing mathematical models, consciously or subconsciously, is constructing the natural transformations between categories that allow a high “compression ratio” for models of the world. These transformations are as much “out there” in the world as the compression allows, but they are not in some ideal Platonic world separate from the physical one. Not sure if this makes sense.
wouldn’t defining abstraction as a natural transformation presuppose the existence of abstraction in order to define it?
There might be a circularity, but I do not see one. The chain of reasoning is, as above:
1. There is a somewhat predictable world out there.
2. There are (surjective) maps from the world to its parts (models).
3. There are commonalities between such maps, such that the procedure for constructing one map can be applied to another map (see the sketch after this list).
4. These commonalities, which would correspond to natural transformations in the CT language, are a way to further compress the models.
5. To an embedded agent these commonalities feel like mathematical abstractions.
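Here is the sketch promised in step 3 (again my own illustrative example, with made-up data): the same map-construction procedure produces a model of two unrelated parts of the territory, so the commonality lives in the procedure rather than in either domain.

```python
import numpy as np

def construct_map(xs, ys, degree=2):
    """One shared map-construction procedure: least-squares polynomial fit."""
    return np.polynomial.Polynomial.fit(xs, ys, degree)

rng = np.random.default_rng(0)

# Territory A: height of a thrown ball over time (roughly parabolic).
t = np.linspace(0.0, 2.0, 50)
height = 10.0 * t - 4.9 * t**2 + rng.normal(0.0, 0.05, t.shape)

# Territory B: braking distance of a car versus speed (also roughly quadratic).
v = np.linspace(5.0, 40.0, 50)
distance = 0.06 * v**2 + rng.normal(0.0, 0.5, v.shape)

# The same procedure maps either domain; that reusability is the commonality
# that steps 4-5 identify with an abstraction.
ball_map = construct_map(t, height)
car_map = construct_map(v, distance)
```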
I do not believe I have used CT to define abstractions, only to meta-model them.
Don’t worry, it’s no trouble :) Thank you, I see your reasoning more clearly now, and my worry about circularity has dissolved. I also see the mental distinction between compression-based models and Platonic abstractions.