I know that material, but couldn’t figure out how your informal descriptions map to it in every case. (When following a proof, I like to make sure I understand every step, and fix problems before proceeding). If this isn’t intended as new, then I won’t sweat it.
The overall message is not really new technically, but its philosophical implications are somewhat surprising even for mathematicians. In general, looking at the same thing from different angles helps one acquire a more thorough understanding, even if it does not necessarily provide a clear short-term benefit.
A few years ago, I chatted with a few very good mathematicians who were not aware of the (relatively straightforward) equivalence between the theorems of Turing and Gödel, but they could see it within a few minutes and had no problem grasping the inherent, natural connection between logical systems and program-evaluating algorithms.
I think there is quite a bit of mysticism and misconception regarding mathematical logic, set theory, etc. among the general public. It is valuable to put them in a context that clarifies their real, inherently algorithmic nature. It may also be helpful for cleaning up one’s own world view.
I am sorry if my account was unclear at points; I am glad to clarify any of them.
An admirable response.
Here’s the first place you lost me:
In order to define the semantics, there needs to be a map to something else that is formally checkable, ruled by symbolics, which is again just information processing. Following that path, we end up with the answer: A logical system is a program that checks that certain logical statements hold for the behavior of another program (the model).
This isn’t precise enough for me to agree that it’s true. Is it a claim? A new definition?
Next:
For the axiom system, we can choose the set of Peano axioms.
I took “axiom system” to mean a set of axioms and rules for deducing their consequences (I know you probably mean the usual first-order logic deduction rules).
Although we cannot verify the correctness of every Peano formula in the model by substituting each possible value, we can still have an infinite loop that arrives at every substitution within a finite number of steps.
What do you mean? Are you talking about some process other than the proof-checking program?
I’ll wait for clarifications on those points before proceeding.
In order to define the semantics,…
This isn’t precise enough for me to agree that it’s true. Is it a claim? A new definition?
First: sorry for the bad grammar! Let me start by rephrasing the first sentence a bit more clearly:
“In order to define semantics, we need to define a map from the logic to the model ….”
It is correct that this description constrains semantics to maps between symbolically checkable systems. Physicists may not agree with this view and could say: “For me, semantics is a mapping from a formal system to a physical system that could be continuous, or for which the Church thesis does not hold.”
Model theoreticians, however, define their notion of semantics between logical systems only. Therefore I think I am in pretty good agreement with their nomenclature.
The message is that even if we consider a continuous system mathematically (for example, the real numbers), only those of its aspects matter which can be captured by a discrete information-processing method. What I argue here is that this implicit projection onto the discrete is best made explicit by viewing it from an algorithmic point of view.
Although we cannot verify the correctness …
What do you mean? Are you talking about some process other than the proof-checking program?
It is more like model checking: given a statement like “For each x: F(x)”, since the input domain is countable, you can simply loop over all substitutions. This program will never stop if the statement is true, but you can at least refute it in a finite number of steps if it is false. This is why I consider falsifiability important.
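The enumeration loop described above can be sketched as follows; the function name `refute`, the optional bound, and the example predicate are my own illustrative choices, not from the discussion:

```python
# Sketch: semi-deciding the falsity of a universally quantified statement
# "for each x: F(x)" over a countable domain (here, the naturals).

from itertools import count
from typing import Callable, Optional

def refute(F: Callable[[int], bool], bound: Optional[int] = None) -> Optional[int]:
    """Loop over the substitutions x = 0, 1, 2, ...
    Returns a counterexample if F(x) fails for some x; never returns
    (or returns None once `bound` is reached) if the statement holds."""
    xs = range(bound) if bound is not None else count()
    for x in xs:
        if not F(x):
            return x   # "for each x: F(x)" is refuted in finitely many steps
    return None        # no counterexample found below the bound

# "Every natural number is less than 100" is false, and the loop finds
# the first counterexample:
print(refute(lambda x: x < 100))  # → 100
```

This is exactly the asymmetry mentioned above: a false universal statement is refuted after finitely many substitutions, while a true one keeps the loop running forever (hence the bound in the sketch).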
I agree that there is also a catch: this only works directly for simple expressions with universal (“for each”) quantifiers alone. It does not work for more complex logical expressions like the twin prime conjecture, which can be cast as:
foreach x: exists y: y > x and prime(y) and prime(y+2)
Still, this expression can be cast into the form “T(x) halts for every input x”, where T is the program that searches for the next pair of twin primes both bigger than x.
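A minimal sketch of such a program T, assuming a naive trial-division primality test (the names `is_prime` and `T` are illustrative):

```python
# Sketch: T(x) searches upward for the next twin prime pair (p, p+2)
# with both members bigger than x. T halts on every input x exactly if
# the twin prime conjecture is true.

def is_prime(n: int) -> bool:
    """Naive trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def T(x: int) -> tuple:
    """Search upward from x+1 for a pair (p, p+2) of primes."""
    p = x + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return (p, p + 2)
        p += 1

print(T(10))  # → (11, 13)
```

So the truth of the conjecture, a statement with an exists-quantifier inside a for-each, has been repackaged as the purely universal statement “T(x) halts for every input x”.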
But what about longer sequences of quantifiers, like
f = foreach a: exists b: foreach c: exists d: … F(a,b,c,d,...)
Can this still be cast into the form “T(x) halts for every input x”? If it could not, we would need to add another layer of logic around the halting of the Turing machine, which would defeat our original purpose.
In fact, there is a neat trick to avoid that: you can define a program T’(n) which takes a single natural number and checks the validity of f for every substitution of total length < n. Then f is equivalent to the statement: T’(n) halts for every natural number n.
Which means we managed to hide the cascade of quantifiers within the algorithm T’. Cool, huh?
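The trick can be sketched as follows. The encoding of the quantifier prefix as a list of strings, and the names `bounded_check` and `T_prime`, are my own illustrative assumptions; the point is only that the whole quantifier cascade disappears inside an ordinary algorithm:

```python
# Sketch: evaluate a quantifier cascade Q1 a: Q2 b: ... F(a, b, ...)
# with every variable restricted to values below n. T'(n) then halts
# exactly when the bounded check succeeds.

def bounded_check(quantifiers, F, n: int) -> bool:
    """Evaluate the formula with each variable ranging over range(n)."""
    def go(i, args):
        if i == len(quantifiers):
            return F(*args)           # the quantifier-free matrix
        if quantifiers[i] == "foreach":
            return all(go(i + 1, args + (v,)) for v in range(n))
        else:  # "exists"
            return any(go(i + 1, args + (v,)) for v in range(n))
    return go(0, ())

def T_prime(quantifiers, F, n: int) -> None:
    """Halts iff the bounded check succeeds; spins forever otherwise."""
    while not bounded_check(quantifiers, F, n):
        pass

# "foreach a: exists b: b > a" fails when both variables are capped at 5,
# since a = 4 has no witness b below 5:
print(bounded_check(["foreach", "exists"], lambda a, b: b > a, 5))  # → False
# "exists a: foreach b: a <= b" holds with a = 0:
print(bounded_check(["exists", "foreach"], lambda a, b: a <= b, 5))  # → True
```

The recursion walks the quantifier prefix one variable at a time, so an arbitrarily long alternation of for-each and exists is handled by the single program, and only the outer “halts for every n” statement remains.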