When actually run, it makes two pieces of the territory change so that they contain a pattern that we would recognize as “5”.
Right, but it doesn’t attach little “5” tags to that pattern.
Can you give an example of information with that property?
Well, trivially not, because by giving the example I create a representation. But, does a theorem become true when it is proven? That seems to me to be absurd. Counterfactually, suppose there were no minds. Would that prevent it from being true that “PA proves 2+2=4”? That also seems absurd. I can’t prove it’s absurd, but that’s because a rock doesn’t implement modus ponens (no universally compelling arguments).
If X will happen tomorrow, then it is a fact that X will happen tomorrow, even though (ignoring for now timeless physics) tomorrow “doesn’t exist yet”, and the information “X will happen tomorrow” needn’t be represented anywhere to be true; it inheres in the state of the universe today + {equations of physics}. Information which can be arrived at by computation from other, existing information, exists—or perhaps we should move the ‘other, existing information’ across the turnstile: it is an existing truth that (Information which can be arrived at by computation from foo) can be arrived at by computation from foo. Tautologies are true.
Can you give an example of information with that property?
Well, trivially not, because by giving the example I create a representation. But, does a theorem become true when it is proven?
I said information, you said theorem—I don’t think it’s the same.
I was expecting you to say something like “the 3^^^3rd digit of pi”, and then I was going to say something, but now that I think about it, I think it’s time to taboo “exist”.
I said information, you said theorem—I don’t think it’s the same.
“Theorem Foo is true in Theory T” is information. Though the 3^^^3rd digit of pi is good too; I want to hear what you have to say about it.
I think it’s time to taboo “exist”.
Ok… “exist” doesn’t have a referent. Any attempt to define it will either be special pleading (my universe is special, it “exists”, because it’s the one I live in!), or will give a definition that applies equally to all mathematical structures.
Though the 3^^^3rd digit of pi is good too; I want to hear what you have to say about it.
I was going to say, it {can be calculated}-exists, but it does not {is extant in the territory}-exist. It certainly has a value, but we will never know what it is. No concrete instance of that information will ever be formed, at least not in this universe. (Barring new physics allowing vastly more computation!)
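For readers unfamiliar with the notation, 3^^^3 uses Knuth’s up-arrows; a minimal sketch of the operation (illustrative Python, not anything from the thread):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n arrows) b: one arrow is exponentiation;
    each additional arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
# 3^^^3 = up_arrow(3, 3, 3) is a power tower of 3s of height 3^^3 —
# don't actually evaluate it; no physical encoding could hold the result,
# which is the point of the example.
```

The quantity 3^^^3 is a tower of 3s some 7.6 trillion levels tall, so “no concrete instance of that information will ever be formed” is an understatement.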
“exist” doesn’t have a referent. Any attempt to define it will either be special pleading (my universe is special, it “exists”, because it’s the one I live in!), or will give a definition that applies equally to all mathematical structures.
Thanks, I think that’s the clearest thing you’ve said so far.
I think my own concept of “exist” has an implicit parameter of “in the universe” or “in the territory”, so it breaks down when applied to the uni/multiverse itself (what could the multiverse possibly exist in?). Much like “what was before the big bang” is not actually a meaningful question because “before” is a time-ish word and whatever it is that we call time didn’t exist before the big bang.
But then, how do you determine whether information exists-in-the-universe at all? Does the number 2 exist-in-the-universe? (I can pick up 2 pebbles, so I’m guessing ‘yes’.) Does the number 3^^^3 exist-in-the-universe? Does the number N = total count of particles in the universe exist-in-the-universe? (I’m guessing ‘yes’, because it’s represented by the universe.) Does N+1 exist-in-the-universe? (After all, I can consider {particles in the universe} union {{particles in the universe}}, with cardinality N+1.) If you allow encodings other than unary, let N = the largest number which can be represented using all the particles in the universe. But I can coherently talk about N+1, because I don’t need to know the value of a number to do arithmetic on it (if N is even, then N+1 is odd, even though I can’t represent the value of N+1). Does the set of natural numbers exist-in-the-universe? If so, I can induct—and therefore, by induction on induction itself, I claim I can perform transfinite induction (aka ‘scary dots’), in which case the first uncountable ordinal exists-in-the-universe, which is something I’d quite like to conclude.
So where does it stop being a heap?
I’m not saying my universe is special just because it’s the one I live in; in fact, I can accept the reality of lots of Everett branches that I don’t live in.
More to the point, I believe that the reality of those Everett branches preexisted your mathematical models of them, or indeed the human invention of mathematics as a whole. Mathematical structures were made in imitation of the universe—not vice versa.
Ok, now taboo your uses of “reality” and “preexisted” in the above comment, because I can’t conceive of meanings of those words in which your comment makes sense.
Ok, now taboo your uses of “reality” and “preexisted” in the above comment, because I can’t conceive of meanings of those words in which your comment makes sense.
The thing about tabooing words is that we find it easy to taboo words that are just confused concepts (it’s easy to taboo the word ‘sound’ and refer to acoustical experience vs acoustic vibrations), and we find it hard to taboo words that are truly about the fundamentals of our universe, such as ‘causality’ or ‘reality’ or ‘existence’ or ‘subjective experience’.
I find it much easier to taboo the words that you think are fundamental—words like ‘mathematical equations’, namely ‘the orderly manipulations of symbols that human brains can learn to correspond to concepts in the material universe in order to predict happenings in said material universe’.
To put it differently: Why don’t you taboo the words “mathematics” and “equations” first, and see if your argument still makes any sense?
we find it hard to taboo words that are truly about the fundamentals of our universe, such as ‘causality’ or ‘reality’ or ‘existence’ or ‘subjective experience’.
I tabooed “exist”, above, by what I think it means. You think ‘existence’ is fundamental, but you’ve not given me enough of a definition for me to understand your arguments that use it as an untabooable word.
words like ‘mathematical equations’
I’d say that (or rather ‘mathematics’) is just ‘the orderly manipulations of symbols’. Or, as I prefer to phrase it, ‘symbol games’.
‘correspond to concepts in the material universe in order to predict happenings in said material universe’
That’s applied mathematics (or, perhaps, physics), an entirely different beast with an entirely different epistemic status.
Why don’t you taboo the words “mathematics” and “equations” first, and see if your argument still makes any sense?
Manipulations of symbols according to formal rules are the ontological basis, and our perception of “physical reality” results merely from our status as collections of symbols in the abstract Platonic realm that defines the convergent results of those manipulations, “existence” being merely how the algorithm feels from inside.
Yup, still makes sense to me!
“Manipulations of symbols according to formal rules are the ontological basis”
I understand “symbols” to be a cognitive shorthand for our brains’ representation of structures in reality. I don’t understand the meaning of the word “symbols” in the abstract, without a brain to interpret them with and map them onto reality.
“existence” being merely how the algorithm feels from inside.
This doesn’t really explain anything to me, it just sounds like wisdom.
I don’t understand the meaning of the word “symbols” in the abstract, without a brain to interpret them with and map them onto reality.
Think in terms of LISP gensyms—objects which themselves support only one operation, ==. The only thing we can say about (rg45t) is that it’s the same as (rg45t) but not the same as (2qox), whereas we think we know what (forall) means (in the game of set theory); in fact, the only reason (forall) has a meaning is because some of our symbol-manipulating rules mention it.
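The gensym idea can be sketched in a few lines (a hypothetical Python stand-in for LISP gensyms; the names `rg45t` and `qox2` are just labels for our convenience, not part of the symbols):

```python
class Gensym:
    """An object supporting only one operation: identity comparison.
    No name, no value, no structure — only 'same or different'."""
    __slots__ = ()

rg45t = Gensym()
qox2 = Gensym()

print(rg45t is rg45t)  # True  — same symbol as itself
print(rg45t is qox2)   # False — a distinct symbol
# Any 'meaning' a symbol like (forall) has lives entirely in the
# rewrite rules that mention it, not in the object itself.
```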
As I understand it ec429’s intuition goes a bit like this:
Take P1, a program that serially computes the digits in the decimal expansion of π. Even if it’s the first time in the history of the universe that that program is run, it doesn’t feel like the person who ran the program (or the computer itself) created that sequence of digits. It feels like that sequence “always existed” (in fact, it feels like it “exists” regardless of running the program, or the existence of the Universe and the time flow it contains), and running the program just led to discovering its precise shape.(#)
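P1 can be made concrete; one well-known choice is Gibbons’ unbounded spigot algorithm (a sketch — any digit-at-a-time π program serves the argument equally well):

```python
def pi_digits():
    """Yield the decimal digits of pi one at a time, forever
    (Gibbons' streaming spigot algorithm, exact integer arithmetic)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now settled
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:  # not enough information yet; consume another series term
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

from itertools import islice
print(list(islice(pi_digits(), 10)))  # the first ten digits of pi
```

Running this for the first time in history would not feel like creating the digit sequence, which is exactly the intuition at issue.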
Now take P2, a program that computes (deterministically) a simulation of, say, a human observer in a universe locally similar(##) to ours, but perhaps slightly different(###) to remove indexing uncertainty. Applying intuition directly to P2, it feels that the simulation isn’t a real world, and whatever the observer inside feels and thinks (including about “existence”) is kind of “fake”; i.e., it feels like we’re creating it, and it wouldn’t exist if we didn’t run the program.
But there is actually no obvious difference from P1: the exact results of what happens inside P2, including the feelings and thoughts of the observer, are predetermined, and are exclusively the consequence of a series of symbolic manipulations or “equation solving” of the exact same kind as those that “generate” the decimals of π.
So either:
1) we are “creating” the sequence of decimals of π whenever we (first? or every time?) compute it, and if so we would also “create” the simulated world when we run P2, or
2) the sequence of digits in the expansion of π “exists” independently of us (and even our universe), and we merely discover (or embody) it when we compute it, and if so the simulated world of P2 also “exists” independently of us, and we simply discover (or embody) it when we execute P2.
I think ec429 “sides” with the first intuition, and you tend more towards the second. I just noticed I am confused.
(I kind of give a bit more weight to the first intuition, since P2 has a lot more going on to confuse my intuitions. But still, there’s no obvious reason why intuitions of my brain about abstract things like the existence of a particular sequence of numbers might match anything “real”.)
(#: This intuition is not necessarily universal; it’s just what I think is at the source of ec429’s post.)
(##: For example, a completely deterministic program that uses 10^5-bit numbers to simulate all particles in a kilometer-radius copy of our world around, say, you at some point while reading this post, with a ridiculously high-quality pseudo-random number generator used to select a single Everett “slice”, and with a simple boundary chosen such that conditions inside the bubble remain livable for a few hours. This (or something very like it, I didn’t think too long about the exponents) is probably implementable with Jupiter-brain-class technology in our universe even with non-augmented-human–written software, not necessarily in “real-time”, and it’s hard to argue that the observer wouldn’t really be a human, at least while the simulation is running.)
(###: E.g., a red cat teleports inside the bubble when it didn’t in the “real” world. For extra fun, imagine that the simulated human thinks about what it means to exist while this happens.)
I think ec429 “sides” with the first intuition, and you tend more towards the second. I just noticed I am confused.
No, I’d say nearer the second—the mathematical expression of the world of P2 “exists” independently of us, and has just as much “existence” as we do. Rocks and trees and leptons, and their equivalents in P2-world, however, don’t “exist”; only their corresponding ‘pieces of math’ flowing through the equations can be said to “exist”.
I don’t quite get what you mean, then. If the various “pieces of math” describe no more and no less than exactly the rocks and trees and leptons, how can one distinguish between the two?
Would you say the math of “x^2 + y^2 = r^2” exists but circles don’t?
Indeed. Circles are merely a map-tool geometers use to understand the underlying territory of Euclidean geometry, which is precisely real vector spaces (which can be studied axiomatically without ever using the word ‘circle’). So, circles don’t exist, but {x \in R² : |x|=r} does. (Plane geometry is one model of the formal system.)
And how exactly would you define the word “circle” other than {x \in R² : |x|=r}?
(In other words, if a geometric locus of points in a plane equidistant from a certain point exists, but circles don’t, the two are different; what, then, is the latter?)
The locus exists, as a mathematical object (it’s the string “{x \in R² : |x|=r}”, not the set {x \in R² : |x|=r}). The “circle”, on the other hand, is a collection of points. You can apply syntactic (i.e. mathematical) operators to a mathematical object; you can’t apply syntactic operators to a collection of points. It is syntactic systems and their productions (i.e. mathematical systems and their strings) which exist.
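To illustrate the distinction (an editor’s sketch, not ec429’s own formalism): a string admits syntactic operations such as substitution, while a “collection of points” only admits semantic operations such as membership tests.

```python
# Syntactic object: the locus as a string; rewriting applies to it directly.
locus = "{x in R^2 : |x| = r}"
unit_circle_expr = locus.replace("r", "1")   # substitution is string rewriting

# Semantic object: the circle as points; all we can do is test membership.
def on_unit_circle(x, y, eps=1e-9):
    return abs((x * x + y * y) ** 0.5 - 1.0) < eps

print(unit_circle_expr)           # {x in R^2 : |x| = 1}
print(on_unit_circle(0.6, 0.8))   # True (a 3-4-5 triangle, scaled)
```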
Hmm. I’m not quite sure I understand why abstract symbols, strings and manipulations of those must exist in a sense in which abstract points, sets of points and manipulations of those don’t, nor am I quite sure why exactly one can’t do “syntactic” operations with points and sets rather than symbols.
In my mind cellular automata look very much like “syntactic manipulation of strings of symbols” right now, and I can’t quite tell why points etc. shouldn’t look the same, other than being continuous. And I’m pretty sure there’s someone out there doing (meta-)math using languages with variously infinite numbers of symbols arranged in variously infinite strings and manipulated by variously infinite syntactic rule sets applied a variously infinite number of times… In fact, rather than being convenient for different applications, I can’t quite tell what existence-relevant differences there are between those. Or in what way rule-based manipulations of strings of symbols are “syntactic” and rule-based manipulations of sets of points aren’t—except for the fact that one is easy for humans to implement. In other words, how is compass-and-straightedge construction not syntactic?
(In terms of the tree-falling-in-the-forest problem, I’m not arguing about what sounds are, I’m just listing why I don’t understand what you mean by sound, in our case “existence”.)
[ETA. By “variously infinite” above I meant “infinite, with various cardinalities”. For the benefit of any future readers, note that I don’t know much about those other than very basic distinctions between countable and uncountable.]
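The cellular-automaton point above can be made literal: an elementary cellular automaton is exactly rule-based string rewriting (an illustrative sketch; rule 110 is an arbitrary choice of rule):

```python
def step(cells, rule=110):
    """One step of an elementary cellular automaton, treated as pure
    string rewriting: each '0'/'1' is rewritten according to the
    3-symbol window around it, as dictated by the rule number's bits."""
    padded = '0' + cells + '0'                 # fixed '0' boundary
    out = []
    for i in range(len(cells)):
        window = int(padded[i:i + 3], 2)       # neighborhood as 0..7
        out.append(str((rule >> window) & 1))  # look up the rewrite
    return ''.join(out)

row = '0000000100000000'
for _ in range(5):
    print(row)
    row = step(row)
```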
Oh, I’m willing to admit variously infinite numbers of applications of the rules… that’s why transfinite induction doesn’t bother me in the slightest.
But, my objection to the existence of abstract points is: what’s the definition of a point? It’s defined by what it does, by duck-typing. For instance, a point in R² is an ordered pair of reals. Now, you could say “an ordered pair (x,y) is the set {{x},{x,y}}”, but that’s silly; that’s not what an ordered pair is, it’s just a construction that exhibits the required behaviour: namely, a constructor from two input values, and an equality axiom “(a,b)==(c,d) iff a==c and b==d”. Yet, from a formal perspective at least, there are many models of those axiomata, and it’s absurd to claim that any one of those is what a point “is”—far more sensible to say that the point “is” its axiomata. Since those axiomata essentially consist of a list of valid string-rewriting rules (like (a,b)==(c,d) |- a==c), they are directly and explicitly syntactic.
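The duck-typing point can be made concrete (a sketch; the Kuratowski set encoding is one standard model among many):

```python
# Two models of 'ordered pair'. What matters is only the axiom
# (a,b)==(c,d) iff a==c and b==d, not the underlying construction.

def make_pair_native(a, b):
    return (a, b)                                            # native tuple model

def make_pair_set(a, b):
    return frozenset([frozenset([a]), frozenset([a, b])])    # Kuratowski model

# Both models satisfy the pair axiom:
for make in (make_pair_native, make_pair_set):
    assert make(1, 2) == make(1, 2)   # equal when components are equal
    assert make(1, 2) != make(2, 1)   # order matters
# ...so neither construction is what a pair 'is'; the axiom is.
```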
Perhaps, indeed, there is a system more fundamental to mathematics than syntactics—but given that the classes of formal languages even over finite strings are “variously infinite” (since language classes are equivalent to computability classes, by something Curry-Howardesque), it seems to me that, by accepting variously infinite strings and running-times, one should find that all mathematical systems are inherently syntactic in nature.
Sadly this is difficult to prove, as all our existing formal methods are themselves explicitly syntactic, and thus anything we can express formally by current means, we can express as syntax. If materialistic and mechanistic ideas about the nature of consciousness are valid, then in fact any mathematics conceivable by human thought is susceptible to syntactic interpretation (for, ultimately, there exists a “validity” predicate over mathematical deductions, and assuming that validity predicate is constant in all factors other than the mathematical deduction itself (which assumption I believe to hold, as I am a Platonist), that predicate has a syntactic expression, though possibly one derived via the physics of the brain). This does not, however, rule out the possibility that there are things we might want to call ‘formal systems’ which are not syntactic in nature. It is my belief—and nothing more than that—that such things do not exist.
These might be stupid questions, but I’m encouraged by a recent post to ask them:
But, my objection to the existence of abstract points is: what’s the definition of a point? It’s defined by what it does, by duck-typing.
Doesn’t that apply to syntactic methods, too? It was my understanding that the symbols, strings and transformation rules don’t quite have a definition except for duck typing, i.e. “symbols are things that can be recognized as identical or distinct from each other”. (In fact, in at least one of the courses I took the teacher explicitly said something like “symbols are not defined”, though I don’t know if that is “common terminology” or just him or her being not quite sure how to explain their “abstractness”.)
And the phrase about ordered pairs applies just as well to ordered strings in syntax, doesn’t it? Isn’t the most common model of “strings” the Lisp-like pair-of-symbol-and-pair-of-symbol-and...?
Oh, wait a minute. Perhaps I got it. Is the following a fair summary of your attitude?
We can only reason rigorously by syntactic methods (at least, it’s the best we have). To reason about the “real world” we must model it syntactically, use syntactic methods for reasoning (produce allowed derivations), then “translate back” the conclusions to “real world” terms. The modelling part can be done in many ways—we can translate the properties of what we model in many ways—but a certain syntactic system has a unique non-ambiguous set of derivations, therefore the things we model from the “real world” are not quite real, only the syntax is.
I think that’s a very good summary indeed, in particular that the “unique non-ambiguous set of derivations” is what imbues the syntax with ‘reality’.
Symbols are indeed not defined, but the only means we have of duck-typing symbols is to do so symbolically (a symbol S is an object supporting an equality operator = with other symbols). You mention Lisp; the best mental model of symbols is Lisp gensyms (which, again, are objects supporting only one operator, equality).
Conses of conses are indeed a common model of strings, but I’m not sure whether that matters—we’re interested in the syntax itself considered abstractly, rather than in any representation of the syntax. Since ad-hoc infinite regress is not allowed, we must take something as primal (just as formal mathematics takes the ‘set’ as primal and constructs everything from set theory), and that is what I do with syntax.
As mathematics starts with axioms about sets and inference rules about sets, so I begin with meta-axioms about syntax and meta-inference rules about syntax. (I then—somewhat reflexively—consider meta²-axioms, then transfinitely induct. It’s a habit I’ve developed lately; a current project of mine is to work out how large a large ordinal kappa must be such that meta^kappa-syntax will prove the existence of ordinals larger than kappa, and then (by transfinite recursion shorter than kappa) prove the existence of [a given large cardinal, or the Von Neumann universe, or some other desired ‘big’ entity]. But that’s a topic for another post, I fear.)
Right, but it doesn’t attach little “” tags to that pattern.
Well, trivially not, because by giving the example I create a representation. But, does a theorem become true when it is proven? That seems to me to be absurd. Counterfactually, suppose there were no minds. Would that prevent it from being true that “PA proves 2+2=4”? That also seems absurd. I can’t prove it’s absurd, but that’s because a rock doesn’t implement modus ponens (no universally compelling arguments).
If X will happen tomorrow, then it is a fact that X will happen tomorrow, even though (ignoring for now timeless physics) tomorrow “doesn’t exist yet”, and the information “X will happen tomorrow” needn’t be represented anywhere to be true; it inheres in the state of the universe today + {equations of physics}. Information which can be arrived at by computation from other, existing information, exists—or perhaps we should move the ‘other, existing information’ across the turnstile: it is an existing truth that (Information which can be arrived at by computation from foo) can be arrived at by computation from foo. Tautologies are true.
I said information, you said theorem—I don’t think it’s the same.
I was expecting you to say something like “the 3^^^3rd digit of pi”, and then I was going to say something, but now that I think about it, I think it’s time to taboo “exist”.
“Theorem Foo is true in Theory T” is information. Though the 3^^^3rd digit of pi is good too; I want to hear what you have to say about it.
Ok… “exist” doesn’t have a referent. Any attempt to define it will either be special pleading (my universe is special, it “exists”, because it’s the one I live in!), or will give a definition that applies equally to all mathematical structures.
I was going to say, it {can be calculated}-exists, but it does not {is extant in the territory}-exist. It certainly has a value, but we will never know what it is. No concrete instance of that information will ever be formed, at least not in this universe. (Barring new phyisics allowing vastly more computation!)
Thanks, I think that’s the clearest thing you’ve said so far.
I think my own concept of “exist” has an implicit parameter of “in the universe” or “in the territory”, so it breaks down when applied to the uni/multiverse itself (what could the multiverse possibly exist in?). Much like “what was before the big bang” is not actually a meaningful question because “before” is a time-ish word and whatever it is that we call time didn’t exist before the big bang.
But then, how do you determine whether information exists-in-the-universe at all? Does the number 2 exist-in-the-universe? (I can pick up 2 pebbles, so I’m guessing ‘yes’.) Does the number 3^^^3 exist-in-the-universe? Does the number N = total count of particles in the universe exist-in-the-universe? (I’m guessing ‘yes’, because it’s represented by the universe.) Does N+1 exist-in-the-universe? (After all, I can consider {particles in the universe} union {{particles in the universe}}, with cardinality N+1) If you allow encodings other than unary, let N = largest number which can be represented using all the particles in the universe. But I can coherently talk about N+1, because I don’t need to know the value of a number to do arithmetic on it (if N is even, then N+1 is odd, even though I can’t represent the value of N+1). Does the set of natural numbers exist-in-the-universe? If so, I can induct—and therefore, by induction on induction itself, I claim I can perform transfinite induction (aka ‘scary dots’) in which case the first uncountable ordinal exists-in-the-universe, which is something I’d quite like to conclude.
So where does it stop being a heap?
I’m not saying my universe is special just because it’s the one I live in, in fact I can accept the reality of lots of Everett branches in which I don’t live in.
More to the point I believe that the reality of those Everett branches preexisted your mathematical models of them, or indeed the human invention of mathematics as a whole. Mathematical structure were made in imitation of the universe—not vice versa.
Ok, now taboo your uses of “reality” and “preexisted” in the above comment, because I can’t conceive of meanings of those words in which your comment makes sense.
The thing about tabooing words, is that we find it easy to taboo words that are just confused concepts (it’s easy to taboo the word ‘sound’ and refer to acoustical experience vs acoustic vibrations), and we find it hard to taboo words that are truly about the fundamentals of our universe, such as ‘causality’ or ‘reality’ or ‘existence’ or ‘subjective experience’.
I find it much easier to taboo the words that you think fundamentals—words like ‘mathematical equations’, namely ‘the orderly manipulations of symbols that human brains can learn to correspond to concepts in the material universe in order to predict happenings in said material universe’
To put it differently: Why don’t you taboo the words “mathematics” and “equations” first, and see if your argument still makes any sense
I tabooed “exist”, above, by what I think it means. You think ‘existence’ is fundamental, but you’ve not given me enough of a definition for me to understand your arguments that use it as an untabooable word.
I’d say that (or rather ‘mathematics’) is just ‘the orderly manipulations of symbols’. Or, as I prefer to phrase it, ‘symbol games’.
That’s applied mathematics (or, perhaps, physics), an entirely different beast with an entirely different epistemic status.
Manipulations of symbols according to formal rules are the ontological basis, and our perception of “physical reality” results merely from our status as collections of symbols in the abstract Platonic realm that defines the convergent results of those manipulations, “existence” being merely how the algorithm feels from inside.
Yup, still makes sense to me!
I understand “symbols” to be a cognitive shorthand for our brains representation of structures in reality. I don’t understand the meaning of the word “symbols” in the abstract, without a brain to interpret them with and map them onto reality.
This doesn’t really explain anything to me, it just sounds like wisdom.
Think in terms of LISP gensyms—objects which themselves support only one operation, ==. The only thing we can say about (rg45t) is that it’s the same as (rg45t) but not the same as (2qox), whereas we think we know what (forall) means (in the game of set theory) - in fact the only reason (forall) has a meaning is because some of our symbol-manipulating rules mention it.
As I understand it ec429’s intuition goes a bit like this:
Take P1, a program that serially computes the digits in the decimal expansion of π. Even if it’s the first time in the history of the universe that that program is run, it doesn’t feel like the person who ran the program (or the computer itself) created that sequence of digits. It feels like that sequence “always existed” (in fact, it feels like it “exists” regardless of running the program, or the existence of the Universe and the time flow it contains), and running the program just led to discovering its precise shape.(#)
Now take P2, a program that computes (deterministically) a simulation of, say, a human observer in a universe locally similar(##) to ours, but perhaps slightly different( ###) to remove indexing uncertainty. Applying intuition directly to P2, it feels that the simulation isn’t a real world, and whatever the observer inside feels and thinks (including about “existence”) is kind of “fake”; i.e., it feels like we’re creating it, and it wouldn’t exist if we didn’t run the program.
But there is actually no obvious difference from P1: the exact results of what happens inside P2, including the feelings and thoughts of the observer, are predetermined, and are exclusively the consequence of a series of symbolic manipulations or “equation solving” of the exact same kind as those that “generate” the decimals of π.
So either: 1) we are “creating” the sequence of decimals of π whenever we (first? or every time?) compute it, and if so we would also “create” the simulated world when we run P2, or 2) the sequence of digits in the expansion of π “exists” indifferently of us (and even our universe), and we merely discover (or embody) it when we compute it, and if so the simulated world of P2 also “exists” indifferently of us, and we simply discover (or embody) it when we execute P2.
I think ec429 “sides” with the first intuition, and you tend more towards the second. I just noticed I am confused.
(I kind of give a bit more weight to the first intuition, since P2 has a lot more going on to confuse my intuitions. But still, there’s no obvious reason why intuitions of my brain about abstract things like the existence of a particular sequence of numbers might match anything “real”.)
(#: This intuition is not necessarily universal, it’s just what I think is at the source ec429’s post.)
(##: For example, a completely deterministic program that uses 10^5 bit numbers to simulate all particles in a kilometer-wide radius copy of our world around, say, you at some point while reading this post, with a ridiculously high-quality pseudo-random number generator used to select a single Everett “slice”, and with a simple boundary chosen such that conditions inside the bubble remain livable for a few hours. This (or something very like it, I didn’t think too long about the exponents) is probably implementable with Jupiter-brain-class technology in our universe even with non-augumented-human–written software, not necessarily in “real-time”, and it’s hard to argue that the observer wouldn’t be really a human, at least while the simulation is running.)
(###: E.g., a red cat walks teleports inside the bubble when it didn’t in the “real” world. For extra fun, imagine that the simulated human thinks about what it means to exist while this happens.)
No, I’d say nearer the second—the mathematical expression of the world of P2 “exists” indifferently of us, and has just as much “existence” as we do. Rocks and trees and leptons, and their equivalents in P2-world, however, don’t “exist”; only their corresponding ‘pieces of math’ flowing through the equations can be said to “exist”.
I don’t quite get what you mean, then. If the various “pieces of math” describe no more and no less than exactly the rocks and trees and leptons, how can one distinguish between the two?
Would you say the math of “x^2 + y^2 = r^2” exists but circles don’t?
Indeed. Circles are merely a map-tool geometers use to understand the underlying territory of Euclidean geometry, which is precisely real vector spaces (which can be studied axiomatically without ever using the word ‘circle’). So, circles don’t exist, but {x \in R² : |x|=r} does. (Plane geometry is one model of the formal system)
And how exactly would you define the word “circle” other than {X \in R² : |x|=r}?
(In other words, if a geometric locus of points in a plane equidistant to a certain point exists, but circles don’t, the two are different; what is then the latter?)
The locus exists, as a mathematical object (it’s the string “{x \in R²: |x|=r}”, not the set {x \in R² : |x|=r}). The “circle” on the other hand is a collection of points. You can apply syntactic (ie. mathematical) operators to a mathematical object; you can’t apply syntactic operators to a collection of points. It is syntactic systems and their productions (ie. mathematical systems and their strings) which exist.
Hmm. I’m not quite sure I understand why abstract symbols, strings and manipulations of those must exist in the a sense in which abstract points, sets of points and manipulations of those don’t, nor am I quite sure why exactly one can’t do “syntactic” operations with points and sets rather than symbols.
In my mind cellular automata look very much like “syntactic manipulation of strings of symbols” right now, and I can’t quite tell why points etc. shouldn’t look the same, other than being continuous. And I’m pretty sure there’s someone out there doing (meta-)math using languages with variously infinite numbers of symbols arranged in variously infinite strings and manipulated by variously infinite syntactic rule sets applied a variously infinite number of times… In fact, rather than being convenient for different applications, I can’t quite tell what existence-relevant differences there are between those. Or in what way rule-based manipulations of strings of symbols are “syntactic” and rule-based manipulations of sets of points aren’t—except for the fact that one is easy to implement by humans. In other words, how is compass-and-straightedge construction not syntactic?
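(That analogy can be made concrete. A sketch in Python, taking Wolfram’s Rule 110 as an arbitrary example—the code and the choice of rule are mine: one update step of an elementary cellular automaton is literally a local rewriting of a string of ‘0’/‘1’ symbols.)

```python
# One step of an elementary cellular automaton as string rewriting:
# each symbol is rewritten according to its three-symbol neighbourhood.
RULE_110 = {
    "111": "0", "110": "1", "101": "1", "100": "0",
    "011": "1", "010": "1", "001": "1", "000": "0",
}

def step(cells):
    """Rewrite a string of '0'/'1' symbols, wrapping at the edges."""
    padded = cells[-1] + cells + cells[0]
    return "".join(RULE_110[padded[i:i + 3]] for i in range(len(cells)))

print(step("0000100"))  # -> 0001100
```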
(In terms of the tree-falling-in-the-forest problem, I’m not arguing about what sounds are, I’m just listing why I don’t understand what you mean by sound, in our case “existence”.)
[ETA. By “variously infinite” above I meant “infinite, with various cardinalities”. For the benefit of any future readers, note that I don’t know much about those other than very basic distinctions between countable and uncountable.]
Oh, I’m willing to admit variously infinite numbers of applications of the rules… that’s why transfinite induction doesn’t bother me in the slightest.
But, my objection to the existence of abstract points is: what’s the definition of a point? It’s defined by what it does, by duck-typing. For instance, a point in R² is an ordered pair of reals. Now, you could say “an ordered pair (x,y) is the set {x,{x,y}}”, but that’s silly, that’s not what an ordered pair is, it’s just a construction that exhibits the required behaviour: namely, a constructor from two input values, and an equality axiom “(a,b)==(c,d) iff a==c and b==d”. Yet, from a formal perspective at least, there are many models of those axiomata, and it’s absurd to claim that any one of those is what a point “is”—far more sensible to say that the point “is” its axiomata. Since those axiomata essentially consist of a list of valid string-rewriting rules (like (a,b)==(c,d) |- a==c), they are directly and explicitly syntactic.
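(A sketch of that point in Python—the two constructions are exactly the ones just mentioned, nothing more: both models exhibit the required behaviour, so neither can claim to be what a pair “is”; only the equality axiom is shared.)

```python
def set_model(a, b):
    """The set-theoretic construction mentioned above: (a, b) := {a, {a, b}}."""
    return frozenset([a, frozenset([a, b])])

def native(a, b):
    """Another model: the language's built-in tuple."""
    return (a, b)

# Both satisfy the axiom: (a,b) == (c,d) iff a == c and b == d.
for pair in (set_model, native):
    assert pair(1, 2) == pair(1, 2)
    assert pair(1, 2) != pair(2, 1)
```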
Perhaps, indeed, there is a system more fundamental to mathematics than syntactics—but given that the classes of formal languages even over finite strings are “variously infinite” (since language classes are equivalent to computability classes, by something Curry-Howardesque), it seems to me that, by accepting variously infinite strings and running-times, one should find that all mathematical systems are inherently syntactic in nature.
Sadly this is difficult to prove, as all our existing formal methods are themselves explicitly syntactic, and thus anything we can express formally by current means, we can express as syntax. If materialistic and mechanistic ideas about the nature of consciousness are valid, then in fact any mathematics conceivable by human thought is susceptible to syntactic interpretation (for, ultimately, there exists a “validity” predicate over mathematical deductions, and assuming that validity predicate is constant in all factors other than the mathematical deduction itself (which assumption I believe to hold, as I am a Platonist), that predicate has a syntactic expression, though possibly one derived via the physics of the brain). This does not, however, rule out the possibility that there are things we might want to call ‘formal systems’ which are not syntactic in nature. It is my belief—and nothing more than that—that such things do not exist.
These might be stupid questions, but I’m encouraged by a recent post to ask them:
Doesn’t that apply to syntactic methods, too? It was my understanding that the symbols, strings and transformation rules don’t quite have a definition except for duck typing, i.e. “symbols are things that can be recognized as identical or distinct from each other”. (In fact, in at least one of the courses I took the teacher explicitly said something like “symbols are not defined”, though I don’t know if that is “common terminology” or just him or her being not quite sure how to explain their “abstractness”.)
And the phrase about ordered pairs applies just as well to ordered strings in syntax, doesn’t it? Isn’t the most common model of “strings” the Lisp-like pair-of-symbol-and-pair-of-symbol-and...?
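(The Lisp-like model can be sketched in Python, with ordinary 2-tuples standing in for cons cells and None for nil—all names here are illustrative:)

```python
def cons(head, tail):
    """A pair: the only constructor the model needs."""
    return (head, tail)

def from_str(s):
    """Build the nested-pair (cons-cell) representation of a string."""
    return None if not s else cons(s[0], from_str(s[1:]))

def to_str(cell):
    """Flatten the nested pairs back into a string."""
    return "" if cell is None else cell[0] + to_str(cell[1])

assert from_str("abc") == ("a", ("b", ("c", None)))
assert to_str(from_str("abc")) == "abc"
```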
Oh, wait a minute. Perhaps I got it. Is the following a fair summary of your attitude?
We can only reason rigorously by syntactic methods (at least, it’s the best we have). To reason about the “real world” we must model it syntactically, use syntactic methods for reasoning (produce allowed derivations), then “translate back” the conclusions to “real world” terms. The modelling part can be done in many ways—we can translate the properties of what we model in many ways—but a certain syntactic system has a unique non-ambiguous set of derivations, therefore the things we model from the “real world” are not quite real, only the syntax is.
I think that’s a very good summary indeed, in particular that the “unique non-ambiguous set of derivations” is what imbues the syntax with ‘reality’.
Symbols are indeed not defined, but the only means we have of duck-typing symbols is to do so symbolically (a symbol S is an object supporting an equality operator = with other symbols). You mention Lisp; the best mental model of symbols is Lisp gensyms (which, again, are objects supporting only one operator, equality).
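(A minimal sketch of that mental model in Python—purely illustrative: a gensym-style object answers only the question “are you the same symbol as me?”, and has no other structure.)

```python
class Gensym:
    """An undefined 'symbol': it supports equality and nothing else."""
    def __eq__(self, other):
        return self is other
    def __hash__(self):
        return id(self)

a, b = Gensym(), Gensym()
assert a == a   # a symbol is identical to itself
assert a != b   # and distinct from every other symbol
```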
Conses of conses are indeed a common model of strings, but I’m not sure whether that matters—we’re interested in the syntax itself considered abstractly, rather than any representation of the syntax. Since ad-hoc infinite regress is not allowed, we must take something as primal (just as formal mathematics takes the ‘set’ as primal and constructs everything from set theory), and that is what I do with syntax.
As mathematics starts with axioms about sets and inference rules about sets, so I begin with meta-axioms about syntax and meta-inference rules about syntax. (I then—somewhat reflexively—consider meta²-axioms, then transfinitely induct. It’s a habit I’ve developed lately; a current project of mine is to work out how large a large ordinal kappa must be such that meta^kappa -syntax will prove the existence of ordinals larger than kappa, and then (by transfinite recursion shorter than kappa) prove the existence of [a given large cardinal, or the Von Neumann universe, or some other desired ‘big’ entity]. But that’s a topic for another post, I fear)