Sorry, my spam filter ate your reply notification :(
To “dissolve” the math invented/discovered question: it’s a false dichotomy. Constructing mathematical models, consciously or subconsciously, is constructing the natural transformations between categories that allow a high “compression ratio” for models of the world. These transformations are as much “out there” in the world as the compression allows, but they are not in some ideal Platonic world separate from the physical one. Not sure if this makes sense.
Wouldn’t defining abstraction as a natural transformation presuppose the existence of abstraction in order to define it?
There might be a circularity, but I do not see one. The chain of reasoning is, as above:
1. There is a somewhat predictable world out there
2. There are (surjective) maps from the world to its parts (models)
3. There are commonalities between such maps such that the procedure for constructing one map can be applied to another map.
4. These commonalities, which would correspond to natural transformations in the CT language, are a way to further compress the models.
5. To an embedded agent these commonalities feel like mathematical abstractions.
I do not believe I have used CT to define abstractions, only to meta-model them.
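As a loose illustration of step 3 (my own sketch, not part of the argument above): in code, a natural transformation is a polymorphic map between two functors that commutes with mapping a function over the contents — the same construction procedure applied uniformly, regardless of the underlying element type. The `safe_head` example below is a standard toy case, not anything specific to the world-modeling claim.

```python
# Toy natural transformation: safe_head maps the List functor to the
# Optional functor, with one uniform definition for every element type A.
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def safe_head(xs: list[A]) -> Optional[A]:
    """Component of the transformation at type A: first element, or None."""
    return xs[0] if xs else None

def fmap_list(f: Callable[[A], B], xs: list[A]) -> list[B]:
    """Functorial action of List: apply f to every element."""
    return [f(x) for x in xs]

def fmap_opt(f: Callable[[A], B], x: Optional[A]) -> Optional[B]:
    """Functorial action of Optional: apply f if a value is present."""
    return None if x is None else f(x)

# Naturality square: mapping f and then taking the head gives the same
# result as taking the head and then mapping f.
f = lambda n: n * n
xs = [3, 4, 5]
assert safe_head(fmap_list(f, xs)) == fmap_opt(f, safe_head(xs))  # both 9
assert safe_head(fmap_list(f, [])) == fmap_opt(f, safe_head([]))  # both None
```

The point of the naturality condition is exactly the “commonality between maps”: `safe_head` is defined once, and the same definition works no matter what the lists contain.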
Don’t worry, it’s no trouble :) Thank you, I see your reasoning more clearly now, and my worry about circularity is gone. I also see the mental distinction between compression models and Platonic abstracts.