Two sheep plus three sheep equals five sheep. Two apples plus three apples equals five apples. Two Discrete ObjecTs plus three Discrete ObjecTs equals five Discrete ObjecTs.
Arithmetic is a formal system, consisting of a syntax and semantics. The formal syntax specifies which statements are grammatical: “2 + 3 = 5” is fine, while “2 3 5 + =” is meaningless. The formal semantics provides a mapping from grammatical statements to truth values: “2 + 3 = 5” is true, while “2 + 3 = 6” is false. This mapping relies on axioms; that is, when we say “statement X in formal system Y is true”, we mean X is consistent with the axioms of Y.
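The syntax/semantics split above can be sketched as two separate checks: one that only inspects the shape of a string, and one that maps grammatical strings to truth values. This is a minimal toy sketch for statements of the form “a + b = c”; the function names are illustrative, not from the post.

```python
# Toy formal system: syntax (grammaticality) vs. semantics (truth values).

def is_grammatical(statement: str) -> bool:
    """Syntax: accept only strings of the form 'a + b = c' with numeric tokens."""
    tokens = statement.split()
    return (len(tokens) == 5
            and tokens[1] == "+" and tokens[3] == "="
            and all(t.isdigit() for t in (tokens[0], tokens[2], tokens[4])))

def is_true(statement: str) -> bool:
    """Semantics: map a grammatical statement to a truth value, using ordinary
    integer arithmetic as the background axioms."""
    if not is_grammatical(statement):
        raise ValueError("not a grammatical statement")
    a, _, b, _, c = statement.split()
    return int(a) + int(b) == int(c)

print(is_grammatical("2 + 3 = 5"))   # True
print(is_grammatical("2 3 5 + ="))   # False: syntactically meaningless
print(is_true("2 + 3 = 5"))          # True
print(is_true("2 + 3 = 6"))          # False
```

Note that `is_true("2 3 5 + =")` raises rather than returning `False`: an ungrammatical string isn’t false, it simply has no truth value in the system.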
Again, this is strictly formal, and has no inherent relationship to the world of physical objects. However, we can model the world of physical objects with arithmetic by creating a correspondence between the formal object “1” and any real-world object. Then, we can evaluate the predictive power of our model.
That is, we can take two sheep and three sheep. We can model these as “2” and “3” respectively; when we apply the formal rules of our model, we conclude that there are “5”. Then we count up the sheep in the real world and find that there are five of them. Thus, we find that our arithmetic model has excellent predictive power. More colloquially, we find that our model is “true”. But in order for our model to be “true” in the “predictive power” sense, the formal system (contained in the map) must be grounded in the territory. Without this grounding, sentences in the formal system could be “true” according to the formal semantics of that system, but they won’t be “true” in the sense that they say something accurate about the territory.
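The grounding step described above can be sketched as three moves: a correspondence maps physical objects to formal numerals, the formal rules produce a prediction, and counting the territory checks that prediction. The herd lists here are illustrative stand-ins for physical sheep.

```python
# Sketch of grounding an arithmetic model in the territory.

herd_a = ["sheep"] * 2          # two physical sheep
herd_b = ["sheep"] * 3          # three physical sheep

# Correspondence (map): each discrete object corresponds to the formal "1",
# so a herd corresponds to its count.
model_a = len(herd_a)           # the formal object "2"
model_b = len(herd_b)           # the formal object "3"

# Formal rules of the model make a prediction:
prediction = model_a + model_b  # "5"

# Territory check: actually count the combined herd.
observed = len(herd_a + herd_b)

print(prediction, observed, prediction == observed)  # 5 5 True
```

If the correspondence were a poor fit for the objects in question (say, herds that merge or split when combined), the formal system would still be internally consistent, but `prediction == observed` would fail: formal truth without predictive truth.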
Of course, the division of the world into discrete objects like sheep is part of the map rather than the territory...
By this definition, both the continuum hypothesis and its negation are “true” in ZFC, since each is consistent with the axioms (assuming ZFC itself is consistent).
This exposition would be much clearer if you reduced/expanded the concepts of “create correspondences between formal and real objects” and “ground a formal system in the territory”. Those look like they’re hiding important mental algorithms which the original post was trying to get at (not the dot-combining one; maybe the one which attributes a common cause, a latent mathematical-truth variable, to explain the similar results of gathering rocks and gathering sheep). Do those phrases, “make correspondences” and “ground a system”, mean that we can stop talking about formal objects and instead talk about the behavior of physical circuits which compute all those formal things: which strings are well formed, what the result of a grammatical transformation will be, and which truth values get mapped to formulas?
As it stands, I don’t see your point. You talk about a model which is true but doesn’t “say something” about reality. You don’t address whether things in reality “say something” about each other prior to humans showing up with their beliefs that reflect reality; that is, whether there are things in the world that look like computations, things whose mutually informative behavior isn’t just the result of intermediary causal chains of physics-stuff jiggling each other.
Or maybe you did a little bit when you called sheep a map-level distinction? Physics clearly doesn’t act directly on sheep, but that doesn’t mean sheep can’t be a substrate for computing. Sheep are still there. It is a fact of reality that some fields contain hooved clumps of meat, even if we have to phrase that fact in terms of the response of visual-field-segmenting and object-permanence-establishing neurons in the brain of a person looking out upon the field.
I just wish I knew what you were getting at.