Indeed. Circles are merely a map-tool geometers use to understand the underlying territory of Euclidean geometry, which is precisely real vector spaces (which can be studied axiomatically without ever using the word ‘circle’). So, circles don’t exist, but {x \in R² : |x|=r} does. (Plane geometry is one model of the formal system)
And how exactly would you define the word “circle” other than {x \in R² : |x|=r}?
(In other words, if a geometric locus of points in a plane equidistant to a certain point exists, but circles don’t, the two are different; what is then the latter?)
The locus exists, as a mathematical object (it’s the string “{x \in R²: |x|=r}”, not the set {x \in R² : |x|=r}). The “circle” on the other hand is a collection of points. You can apply syntactic (ie. mathematical) operators to a mathematical object; you can’t apply syntactic operators to a collection of points. It is syntactic systems and their productions (ie. mathematical systems and their strings) which exist.
Hmm. I’m not quite sure I understand why abstract symbols, strings and manipulations of those must exist in a sense in which abstract points, sets of points and manipulations of those don’t, nor am I quite sure why exactly one can’t do “syntactic” operations with points and sets rather than symbols.
In my mind cellular automata look very much like “syntactic manipulation of strings of symbols” right now, and I can’t quite tell why points etc. shouldn’t look the same, other than being continuous. And I’m pretty sure there’s someone out there doing (meta-)math using languages with variously infinite numbers of symbols arranged in variously infinite strings and manipulated by variously infinite syntactic rule sets applied a variously infinite number of times… In fact, beyond being convenient for different applications, I can’t quite tell what existence-relevant differences there are between those. Or in what way rule-based manipulations of strings of symbols are “syntactic” and rule-based manipulations of sets of points aren’t—except for the fact that one is easy to implement by humans. In other words, how is compass and straightedge construction not syntactical?
(In terms of the tree-falling-in-the-forest problem, I’m not arguing about what sounds are, I’m just listing why I don’t understand what you mean by sound, in our case “existence”.)
[ETA. By “variously infinite” above I meant “infinite, with various cardinalities”. For the benefit of any future readers, note that I don’t know much about those other than very basic distinctions between countable and uncountable.]
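To make the cellular-automaton analogy concrete, here is a minimal sketch (in Python; the names and the choice of Rule 110 are mine, purely for illustration) of a one-dimensional automaton written as nothing but rule-based rewriting of a string of symbols:

```python
# An elementary cellular automaton as pure string rewriting: each step
# replaces every symbol according to a rule keyed on its 3-symbol
# neighbourhood. Rule 110, with wrap-around at the edges.

RULE_110 = {
    "111": "0", "110": "1", "101": "1", "100": "0",
    "011": "1", "010": "1", "001": "1", "000": "0",
}

def step(cells, rule):
    """Rewrite the whole string by applying the rule to every window."""
    padded = cells[-1] + cells + cells[0]  # wrap neighbours around
    return "".join(rule[padded[i:i + 3]] for i in range(len(cells)))

state = "0000000100000000"
for _ in range(4):
    state = step(state, RULE_110)
```

Nothing here refers to anything but symbols and rewrite rules, which is exactly the sense in which the automaton looks “syntactic”.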
Oh, I’m willing to admit variously infinite numbers of applications of the rules… that’s why transfinite induction doesn’t bother me in the slightest.
But, my objection to the existence of abstract points is: what’s the definition of a point? It’s defined by what it does, by duck-typing. For instance, a point in R² is an ordered pair of reals. Now, you could say “an ordered pair (x,y) is the set {{x},{x,y}}”, but that’s silly, that’s not what an ordered pair is, it’s just a construction that exhibits the required behaviour: namely, a constructor from two input values, and an equality axiom “(a,b)==(c,d) iff a==c and b==d”. Yet, from a formal perspective at least, there are many models of those axiomata, and it’s absurd to claim that any one of those is what a point “is”—far more sensible to say that the point “is” its axiomata. Since those axiomata essentially consist of a list of valid string-rewriting rules (like (a,b)==(c,d) |- a==c), they are directly and explicitly syntactic.
Perhaps, indeed, there is a system more fundamental to mathematics than syntactics—but given that the classes of formal languages even over finite strings are “variously infinite” (since language classes are equivalent to computability classes, by something Curry-Howardesque), it seems to me that, by accepting variously infinite strings and running-times, one should find that all mathematical systems are inherently syntactic in nature.
Sadly this is difficult to prove, as all our existing formal methods are themselves explicitly syntactic and thus anything we can express formally by current means, we can express as syntax. If materialistic and mechanistic ideas about the nature of consciousness are valid, then in fact any mathematics conceivable by human thought is susceptible to syntactic interpretation (for, ultimately, there exists a “validity” predicate over mathematical deductions, and assuming that validity predicate is constant in all factors other than the mathematical deduction itself (which assumption I believe to hold, as I am a Platonist), that predicate has a syntactic expression, though possibly one derived via the physics of the brain). This does not, however, rule out the possibility that there are things we might want to call ‘formal systems’ which are not syntactic in nature. It is my belief—and nothing more than that—that such things do not exist.
These might be stupid questions, but I’m encouraged by a recent post to ask them:
But, my objection to the existence of abstract points is: what’s the definition of a point? It’s defined by what it does, by duck-typing.
Doesn’t that apply to syntactic methods, too? It was my understanding that the symbols, strings and transformation rules don’t quite have a definition except for duck typing, i.e. “symbols are things that can be recognized as identical or distinct from each other”. (In fact, in at least one of the courses I took the teacher explicitly said something like “symbols are not defined”, though I don’t know if that is “common terminology” or just him or her being not quite sure how to explain their “abstractness”.)
And the phrase about ordered pairs applies just as well to ordered strings in syntax, doesn’t it? Isn’t the most common model of “strings” the Lisp-like pair-of-symbol-and-pair-of-symbol-and...?
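To illustrate the point (a quick Python sketch, names mine): the Lisp-like model of a string is nested pairs terminated by a sentinel, and converting back and forth shows it carries exactly the same information as the flat string — it is one model of “string”, not the string itself.

```python
# A string modelled as nested pairs, ('a', ('b', ('c', None))),
# in the Lisp cons-cell style, with round-trip conversions.

def to_conses(s):
    """Build the nested-pair representation of a flat string."""
    cells = None
    for ch in reversed(s):
        cells = (ch, cells)
    return cells

def from_conses(cells):
    """Flatten the nested-pair representation back to a string."""
    out = []
    while cells is not None:
        ch, cells = cells
        out.append(ch)
    return "".join(out)
```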
Oh, wait a minute. Perhaps I got it. Is the following a fair summary of your attitude?
We can only reason rigorously by syntactic methods (at least, it’s the best we have). To reason about the “real world” we must model it syntactically, use syntactic methods for reasoning (produce allowed derivations), then “translate back” the conclusions to “real world” terms. The modelling part can be done in many ways—we can translate the properties of what we model in many ways—but a certain syntactic system has a unique non-ambiguous set of derivations, therefore the things we model from the “real world” are not quite real, only the syntax is.
I think that’s a very good summary indeed, in particular that the “unique non-ambiguous set of derivations” is what imbues the syntax with ‘reality’.
Symbols are indeed not defined, but the only means we have of duck-typing symbols is to do so symbolically (a symbol S is an object supporting an equality operator = with other symbols). You mention Lisp; the best mental model of symbols is Lisp gensyms (which, again, are objects supporting only one operator, equality).
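A minimal sketch of that mental model (Python rather than Lisp, class name mine): each freshly minted symbol is distinguishable from every other, and equality is the only operation it meaningfully supports.

```python
# Gensym-style symbols: defined by nothing but identity-based equality.

class Gensym:
    """An object supporting only one meaningful operator: equality."""

    def __eq__(self, other):
        return self is other  # equal only to itself

    def __hash__(self):
        return id(self)  # allow use in sets/dicts, consistent with __eq__

a, b = Gensym(), Gensym()
assert a == a
assert a != b
```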
Conses of conses are indeed a common model of strings, but I’m not sure whether that matters—we’re interested in the syntax itself considered abstractly, rather than any representation of the syntax. Since ad-hoc infinite regress is not allowed, we must take something as primal (just as formal mathematics takes the ‘set’ as primal and constructs everything from set theory) and that is what I do with syntax.
As mathematics starts with axioms about sets and inference rules about sets, so I begin with meta-axioms about syntax and meta-inference rules about syntax. (I then—somewhat reflexively—consider meta²-axioms, then transfinitely induct. It’s a habit I’ve developed lately; a current project of mine is to work out how large a large ordinal kappa must be such that meta^kappa-syntax will prove the existence of ordinals larger than kappa, and then (by transfinite recursion shorter than kappa) prove the existence of [a given large cardinal, or the von Neumann universe, or some other desired ‘big’ entity]. But that’s a topic for another post, I fear)