I think I agree. This gives me the feeling that every time the discussion revolves around models, it is getting off the point. We can touch proof systems. We can’t touch models.
We can say that models are part of our pebblecraft...
The question, though, is whether we should trust particular parts of our pebblecraft. Should we prefer to work with first-order logic, or 2nd-order logic? Should we believe in such a thing as a standard model of the natural numbers? Should we trust proofs which make use of those concepts?
Feeling our way into a new formal system is part of our (messy, informal) pebblecraft. Sometimes people propose formal systems starting with their intended semantics (roughly, model theory). Sometimes people propose formal systems starting with introduction and elimination rules (roughly, proof theory). If the latter, people sometimes look for a semantics to go along with the syntax (and vice versa, of course).
For example, lambda calculus started with rules for performing beta reduction. In talking about lambda calculus, people refer to it as “functions from functions to functions”. Mathematicians knew that there was no nontrivial set S with a bijection to the set of all functions from S to S. So something else must be going on. Dana Scott invented domain theory partially to solve this problem. Domains have additional structure, such that some domains D do have bijections with the set of all structure-preserving maps from D to D. http://en.wikipedia.org/wiki/Domain_theory Similarly, modal logics started with just proof theory, and then Saul Kripke invented a semantics for them. http://en.wikipedia.org/wiki/Kripke_semantics
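To make "rules for performing beta reduction" concrete, here is a minimal sketch (my own illustration, not anything from the original exchange; it needs Python 3.10+ for structural pattern matching). Substitution here is capture-naive, which is fine for the closed terms in the demo.

```python
# Untyped lambda-calculus terms and a single beta-reduction step.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: "Term"

@dataclass(frozen=True)
class App:
    func: "Term"
    arg: "Term"

Term = Var | Lam | App

def subst(term: Term, name: str, value: Term) -> Term:
    """term[name := value], ignoring variable capture."""
    match term:
        case Var(n):
            return value if n == name else term
        case Lam(p, b):
            return term if p == name else Lam(p, subst(b, name, value))
        case App(f, a):
            return App(subst(f, name, value), subst(a, name, value))

def beta_step(term: Term) -> Term:
    """One beta step at the root: (lambda x. body) arg -> body[x := arg]."""
    match term:
        case App(Lam(p, b), a):
            return subst(b, p, a)
    return term

# (lambda x. x x) (lambda y. y)  ->  (lambda y. y) (lambda y. y)
t = App(Lam("x", App(Var("x"), Var("x"))), Lam("y", Var("y")))
print(beta_step(t))
```

The self-application `x x` in the demo is exactly what breaks the naive set-theoretic reading: an element of S that is simultaneously a function from S to S is what the bijection argument rules out, and what domains were built to accommodate.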
There’s always a syntactical model which you can construct mechanically from the proof theory (at least, that’s my understanding of the downward Löwenheim–Skolem argument) - but Scott and Kripke were not satisfied with that syntactical model, and went looking for something else, more insightful and perspicuous. Adding a “semantic” understanding can increase our (informal) confidence in a formal system—even a formal system that we were already working with. I’m not sure why it helps, but I think it’s part of our pebblecraft that it does help.
Perhaps adding a “semantic” understanding is like another bridge between informal and formal reasoning. These bridges are only partly formal—they’re also partly informal, the concepts and gesturing around the equations and proofs. Having one bridge is sufficient to go onto the Island of Formality, do some stuff, and come off again, but it might be more convenient to have two, or three.
Nice. This seems like a surprisingly well-motivated way of reducing everything to physics: there’s just “syntactic” machinery made out of physics, and any semantics that might be attributed to parts of this machinery is merely a partially informal device (i.e. a collection of cognitive skills) that human mathematicians might use as an aid for reasoning about the machinery. Even when the machinery itself might in some sense be said to be semantically reasoning about something or other, this description of the machinery can be traced back to human mathematicians who use it as a partially informal device for understanding the machinery, and so it won’t strictly speaking be a property of the machinery itself.
In other words, in this view semantics is an informal art primarily concerned with the advancement of human understanding; it’s not fundamental to the operation of intelligence in general, and it’s not needed for properly designing things, responding to observations, or making decisions, any more than curiosity or visual thinking is.
Ok, right. I think I didn’t fully appreciate your point before. So the fact that a particular many-sorted first-order logic with extra axioms is proof-theoretically equivalent to (a given proof system for) 2nd-order logic should make me stop asking whether we should build machines with one or the other, and start asking instead whether the many-sorted logic with extra axioms is any better than plain first-order logic (to which the answer appears to be yes, based on our admittedly 2nd-order reasoning). Right?
The prevalence of encodings means that we might not be able to “build machines with one or the other”. That is, given that there are basic alternatives A and B and A can encode B and B can encode A, it would take a technologist specializing in hair-splitting to say whether a machine that purportedly is using A is “really” using A at its lowest level or whether it is “actually” using B and only seems to use A via an encoding.
If in the immediate term you want to work with many-sorted first-order logic, a reasonable first step would be to implement single-sorted first-order logic, and a preprocessor to translate the many-sorted syntax into the single-sorted syntax. Then further development, optimizations and so on, might complicate the story, compacting the preprocessor and single-sorted engine into a single technology that you might reasonably say implements many-sorted logic “natively”.
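A minimal sketch of that preprocessor step (my illustration; the AST and names are invented for the example). Each sort becomes a unary predicate, and sorted quantifiers are relativized to it; a fuller translation would also emit a nonemptiness axiom (exists x. S(x)) for each sort S.

```python
# Relativize many-sorted quantifiers into single-sorted ones.
from dataclasses import dataclass

@dataclass(frozen=True)
class ForAll:        # many-sorted: forall var : sort . body
    var: str
    sort: str
    body: object     # ForAll | Exists | str (an atomic formula)

@dataclass(frozen=True)
class Exists:        # many-sorted: exists var : sort . body
    var: str
    sort: str
    body: object

def relativize(formula) -> str:
    """Render a many-sorted formula as single-sorted text:
    forall x:S. phi  =>  forall x. (S(x) -> phi)
    exists x:S. phi  =>  exists x. (S(x) & phi)
    """
    match formula:
        case ForAll(v, s, b):
            return f"forall {v}. ({s}({v}) -> {relativize(b)})"
        case Exists(v, s, b):
            return f"exists {v}. ({s}({v}) & {relativize(b)})"
        case str() as atom:
            return atom

print(relativize(ForAll("x", "Nat", Exists("y", "Nat", "Lt(x,y)"))))
# forall x. (Nat(x) -> exists y. (Nat(y) & Lt(x,y)))
```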
The prevalence of encodings means that apparently different foundations don’t always make a difference.
In this case, that’s not true. The many-sorted logic, with axioms and all to emulate 2nd-order logic, has different properties than plain 1st-order logic (even though we may be emulating it in a plain 1st-order engine).
For example, in 2nd-order logic, we can quantify over any names we use. In 1st-order logic, this is not true: we can quantify over 1st-order entities, but we are unable to quantify over 2nd-order entities. So, we can have named entities (predicates and relations) which we are unable to quantify over.
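A standard example of what that buys (my illustration; the predicate names are placeholders): indiscernibility of two individuals a and b is a single sentence in 2nd-order logic, but only an open-ended schema in 1st-order logic.

```latex
% 2nd-order: one sentence, quantifying over the predicate variable P
\forall P\, \bigl( P(a) \leftrightarrow P(b) \bigr)
% 1st-order: P cannot be quantified, so the best we can do is one
% formula per named predicate symbol in the language:
\mathrm{Red}(a) \leftrightarrow \mathrm{Red}(b), \qquad
\mathrm{Prime}(a) \leftrightarrow \mathrm{Prime}(b), \quad \dots
% a schema with one instance per name, never a single quantified statement.
```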
One consequence of this is that, in 1st-order logic, we can never prove something non-tautological about a predicate or relation without first making some assumptions about that specific predicate or relation. Statements which share no predicates or relations are logically independent! (This is essentially the Craig interpolation theorem: any entailment between two statements factors through an interpolant built only from their shared vocabulary, so with no shared vocabulary the entailment must be trivial.)
This limits the influence of concepts on one another, to some extent.
This sort of thing does not hold for 2nd-order logic, and so it doesn’t hold for the translation of 2nd-order logic into 1st-order logic, either. (Basically, all statements in this many-sorted logic will use the membership predicate, which causes us to lose the guarantee of logical independence for other named items.)
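Concretely, here is a sketch of why (mine, following the standard encoding): the translation funnels every 2nd-order application through one membership predicate, so translated statements always share vocabulary.

```latex
% Quantification over predicates becomes quantification over a new sort
% of "classes", guarded by a Class predicate:
\forall P\, \varphi(P) \;\rightsquigarrow\;
  \forall p\, \bigl( \mathrm{Class}(p) \rightarrow \varphi^{*}(p) \bigr)
% and every atomic application is rewritten through membership:
P(x) \;\rightsquigarrow\; x \in p
% Since every translated statement mentions \in (and Class), no two
% translated statements have disjoint vocabularies.
```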
So we have to be careful: an encoding of one thing into another thing doesn’t give us everything.