In the end, we don’t really care about sets at all. They’re just bags with stuff in them. Who cares about bags? But we do care about functions—we want those to be rule-based. We need functions to go “from” somewhere and “to” somewhere. Let’s call those things sets. Then we need these “sets” to be rule-based.
I’m grateful for your comments. They’re very useful, and you raise good points. I’ve got most of a post written already about how functions give meaning to the elements of sets. As for how a function is a verb: think of properties as existing in verbs. To know something, you need to observe it in some way, which means it has to affect your sensory devices, such as your ears, eyes, thermometers, whatever. You know dogs, for example, by the way they bark, the way they lick, the way they look, etc. So properties exist in the verbs. “Legs” is a noun, but all of your knowledge about legs has to come from verbs. Does that make sense?
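Here’s a tiny sketch of what I mean, in TypeScript (the names `Dog`, `bark`, and so on are just made up for illustration): an object whose interface is nothing but verbs, so everything an observer can know about it flows through those verbs.

```typescript
// A minimal sketch: we never expose what a Dog "is", only what it does.
// All names here (Dog, bark, lick, appear) are illustrative, not from the post.
interface Dog {
  bark(): string;   // what we hear
  lick(): string;   // what we feel
  appear(): string; // what we see
}

// Everything an observer can learn about `d` comes through these verbs.
function describe(d: Dog): string {
  return `${d.bark()} / ${d.lick()} / ${d.appear()}`;
}

const rex: Dog = {
  bark: () => "woof",
  lick: () => "slobbery",
  appear: () => "four legs, wagging tail",
};

console.log(describe(rex)); // "woof / slobbery / four legs, wagging tail"
```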
You are making partial sense, in that you are pointing to a modelling style, but it leaves me unsure whether I can correctly fill in the missing bits. My thinking is interfered with a lot by “getter functions” of the form `function get_color(self) { return self.color }`. One of the points of such functions is that attributes tend to be private while methods are public, so the programmer, should he need to do so, can change the implementation details without breaking outside customers. So the modelling style shares a similarity: objects are allowed to secretly have details outside of their interface. Sure, if verbs and objects are mixed up but we can express object-like things as verbs by converting objects to verbs, then we only have to care about one basic ontology type. But I am unsure whether I missed it, or whether it is forthcoming, why it is important or valuable to focus on the verbs.
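For concreteness, here is the pattern I have in mind, sketched in TypeScript (the class and field names are my own, purely for illustration): the attribute is private, the verb is public, and the representation behind the verb could change without callers noticing.

```typescript
// Sketch of the getter pattern: callers only ever see the verb getColor(),
// never the private attribute, so the representation can change freely.
class Shape {
  private color: string; // hidden detail, outside the public interface

  constructor(color: string) {
    this.color = color;
  }

  // The public "verb": the only way outside customers learn the color.
  getColor(): string {
    return this.color;
  }
}

const s = new Shape("red");
console.log(s.getColor()); // "red" — the caller never touches s.color directly
```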
I am unsure what rule-basedness is, but if it is different from the extensional conception of functions then I would be super intrigued. I can see that sensing should be modelled with functions in that way, but it seems to contradict how functions were supposed to be prediction or evolution models. So if I have `a(b(c(d)))`, does it mean that d first goes through two kinds of evolution and is then observed, or that d goes through one kind of evolution and then an observation of that is itself observed? I expect this kind of division is not an actual problem, but I can’t effortlessly go from the function formalism to the observation/prediction formalisms, and would likely make a lot of errors there.
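To show where my confusion would get caught, here is a sketch (my own made-up types, not anything from the post) in which the types force one reading: evolutions go state to state, observations go state to reading, so `a(b(c(d)))` can only parse as “evolve twice, then observe once.”

```typescript
// Hypothetical types, just to make the composition question concrete.
type State = { position: number };
type Reading = number;

// Evolutions: State -> State. Observations: State -> Reading.
const c = (s: State): State => ({ position: s.position + 1 }); // evolve
const b = (s: State): State => ({ position: s.position * 2 }); // evolve again
const a = (s: State): Reading => s.position;                   // observe

const d: State = { position: 3 };

// Reading "inside out": d evolves twice (c, then b), and only then is observed.
// The types forbid the other parse: a's output is a Reading, so nothing
// downstream could treat it as a State to evolve further.
console.log(a(b(c(d)))); // 8
```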
My next series of posts will be directly about the Yoneda lemma, which basically tells us that everything you could want to know about an object is contained in the morphisms going into/out of the object. Moreover, we get this knowledge in a “natural” way that makes life really easy. It’s a pretty cool theorem.
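If you want a taste of it in code before then, here’s a rough sketch in TypeScript (my own names, and only the set-flavored shadow of the real lemma, which is a statement about natural transformations): an element is completely recoverable from how every function out of its type acts on it.

```typescript
// Yoneda flavor, sketched: an element is determined by how all functions act on it.
// `Probe<A>` packages up "the result of every function out of A, applied to some fixed a".
type Probe<A> = <B>(f: (a: A) => B) => B;

// Embed: turn a value into its bundle of probe-results.
const embed = <A>(a: A): Probe<A> => (f) => f(a);

// Recover: probing with the identity function gives the value back.
const recover = <A>(p: Probe<A>): A => p((x) => x);

const p = embed(42);
console.log(p((n) => n + 1)); // 43 — one morphism's view of the object
console.log(recover(p));      // 42 — the morphisms collectively pin it down
```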