These might be stupid questions, but I’m encouraged by a recent post to ask them:
But, my objection to the existence of abstract points is: what’s the definition of a point? It’s defined by what it does, by duck-typing.
Doesn’t that apply to syntactic methods, too? It was my understanding that the symbols, strings and transformation rules don’t quite have a definition except by duck typing, i.e. “symbols are things that can be recognized as identical to or distinct from each other”. (In fact, in at least one of the courses I took the teacher explicitly said something like “symbols are not defined”, though I don’t know whether that is “common terminology” or just the teacher not being quite sure how to explain their “abstractness”.)
And the phrase about ordered pairs applies just as well to ordered strings in syntax, doesn’t it? Isn’t the most common model of “strings” the Lisp-like pair-of-symbol-and-pair-of-symbol-and...?
Oh, wait a minute. Perhaps I got it. Is the following a fair summary of your attitude?
We can only reason rigorously by syntactic methods (or at least, they are the best we have). To reason about the “real world” we must model it syntactically, use syntactic methods to reason (produce allowed derivations), then “translate back” the conclusions into “real world” terms. The modelling part can be done in many ways (we can translate the properties of what we model in many ways), but a given syntactic system has a unique, unambiguous set of derivations; therefore the things we model from the “real world” are not quite real, only the syntax is.
I think that’s a very good summary indeed, in particular that the “unique non-ambiguous set of derivations” is what imbues the syntax with ‘reality’.
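To make the “unique non-ambiguous set of derivations” concrete, here is a minimal sketch in Python of a toy rewrite system (the two rules and the start string are my own arbitrary choices for illustration, not anything from this exchange): once the symbols, the start string and the transformation rules are fixed, the set of derivable strings is completely determined, whatever meaning anyone attaches to the symbols.

```python
# Toy syntactic system over the symbols 'A' and 'B'.
# Rule 1: a string ending in 'A' may have 'B' appended.
# Rule 2: any string may be doubled.
# The rules are arbitrary; the point is that the derivable set
# is fixed by the rules alone.

def successors(s):
    out = set()
    if s.endswith("A"):
        out.add(s + "B")   # Rule 1
    out.add(s + s)         # Rule 2
    return out

def derivable(start, max_len=8):
    """All strings derivable from `start` of length at most max_len."""
    seen, frontier = {start}, {start}
    while frontier:
        frontier = {t for s in frontier for t in successors(s)
                    if len(t) <= max_len} - seen
        seen |= frontier
    return seen

print(sorted(derivable("A")))
# Every run prints the same set: the derivations are determined by
# the syntax alone, independent of any interpretation of 'A' and 'B'.
```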
Symbols are indeed not defined, but the only means we have of duck-typing symbols is to do so symbolically (a symbol S is an object supporting an equality operator = with other symbols). You mention Lisp; the best mental model of symbols is Lisp gensyms (which, again, are objects supporting only one operator, equality).
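For concreteness, a minimal Python sketch of that gensym picture (my own analogy, not anything from the original discussion): each symbol is an opaque object supporting nothing but equality with, and distinctness from, other symbols.

```python
class Symbol:
    """An opaque token: no name, no value, no ordering; only identity.

    Equality falls back to Python's default object identity, so two
    symbols are equal exactly when they are the same symbol, which is
    roughly the behaviour of a Lisp gensym.
    """
    __slots__ = ()

def gensym():
    return Symbol()

a, b = gensym(), gensym()
assert a == a   # a symbol is identical to itself
assert a != b   # and distinct from every other symbol
# There is nothing else one can ask of a symbol.
```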
Conses of conses are indeed a common model of strings, but I’m not sure whether that matters: we’re interested in the syntax itself considered abstractly, rather than any representation of the syntax. Since ad-hoc infinite regress is not allowed, we must take something as primal (just as formal mathematics takes the ‘set’ as primal and constructs everything from set theory), and that is what I do with syntax.
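To illustrate the “representation versus the syntax itself” point, another small sketch (again just my own illustration): the same abstract three-symbol string can be modelled as a Lisp-style chain of pairs or as a flat sequence, and nothing syntactic distinguishes the two encodings.

```python
# Two concrete models of the abstract string  s1 s2 s3.

s1, s2, s3 = object(), object(), object()   # gensym-like symbols

# Model 1: Lisp-style conses, i.e. pair-of-symbol-and-pair-of-...
cons_string = (s1, (s2, (s3, None)))

# Model 2: a flat sequence.
list_string = [s1, s2, s3]

def cons_to_list(c):
    """Flatten a cons chain so the two models can be compared."""
    out = []
    while c is not None:
        head, c = c
        out.append(head)
    return out

# Both encodings present the same sequence of symbols; the abstract
# syntax is what they share, not either particular representation.
assert cons_to_list(cons_string) == list_string
```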
As mathematics starts with axioms about sets and inference rules about sets, so I begin with meta-axioms about syntax and meta-inference rules about syntax. (I then, somewhat reflexively, consider meta²-axioms, then transfinitely induct. It’s a habit I’ve developed lately; a current project of mine is to work out how large an ordinal kappa must be such that meta^kappa-syntax will prove the existence of ordinals larger than kappa, and then (by transfinite recursion shorter than kappa) prove the existence of [a given large cardinal, or the von Neumann universe, or some other desired ‘big’ entity]. But that’s a topic for another post, I fear.)