There are some weirdnesses down that route—for example, it turns out that you can’t distinguish zero from nonzero, so the step function is actually uncomputable.
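A minimal sketch of the phenomenon, in Python, under the standard model of a computable real as a function from a precision bound to a rational approximation (the names here are illustrative, not anyone’s actual construction). Computing the step function at x would require deciding whether x = 0, and no finite number of probes can ever confirm that:

```python
from fractions import Fraction

# Model a computable real as a function that, given a rational
# precision bound eps, returns a rational within eps of the value.
def zero():
    return lambda eps: Fraction(0)

def two_to_minus(n):
    # Equal to 2**-n: indistinguishable from zero until we probe
    # at precision finer than 2**-n.
    return lambda eps: Fraction(1, 2**n)

def appears_nonzero(x, max_depth=50):
    """Semi-decide x != 0 by probing at finer and finer precision.
    If some approximation exceeds its own error bound, x cannot be
    zero. If x really is zero, no depth of probing ever settles it."""
    for k in range(1, max_depth + 1):
        eps = Fraction(1, 2**k)
        if abs(x(eps)) > eps:  # then |x| >= |x(eps)| - eps > 0
            return True
    return None  # inconclusive: maybe zero, maybe just very small

print(appears_nonzero(two_to_minus(10)))  # True (found at depth 11)
print(appears_nonzero(zero()))            # None, however deep we probe
```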
My contrarian claim is that everyone could live with the nameable numbers—that is, the numbers that can be pointed out using a finite number of books to describe them. People who really strongly care about the uncountability of the reals have a hard time coming up with a concrete example of what they’d miss.
My contrarian claim is that everyone could live with the nameable numbers
I don’t understand. Those also seem to fall prey to
it turns out that you can’t distinguish zero from nonzero, so the step function is actually uncomputable.
Also,
People who really strongly care about the uncountability of the reals have a hard time coming up with a concrete example of what they’d miss.
Lebesgue measure theory, Gal(C/R) = Z/2Z, and some pathological examples in the history of differential geometry without which the current definition of a manifold would have been much more difficult to ascertain.
Off the top of my head. There are certainly other things I would miss.
I’m confused; this is true for any real closed field. What are you getting at with this?
A mistake. I was thinking of C as the so-called “generic complex numbers.” You’re right that if you replace C with the algebraic closure of whatever countable model’s been dreamed up, then C = R[i] and that’s it.
Admittedly I’m only conjecturing that Gal(C/K) will be different for some K countable, but I think there’s good evidence in favor of it. After all, if K is the algebraic closure of Q, then Gal(C/K) is gigantic. It doesn’t seem likely that one could “fix” the other “degrees of freedom” with only countably many irrationals.
Those are theories, which are not generally lost if you switch the underlying definitions aptly—and they are sometimes improved (if the new definitions are better, or if the switch demonstrates an abstraction that was not previously known).
People can’t pick out specific examples of numbers that are lost by switching to using nameable numbers, they can only gesture at broad classes of numbers, like “0.10147829..., choosing subsequent digits according to no specific rule”. If you can describe a specific example (using Lebesgue measure theory if you like), then that description is a name for that number.
I’m not sure what a “nameable number” is. Whatever countable naming scheme you invent, I can “name” a number that’s outside it by the usual diagonal trick: it differs from your first nameable number in the first digit, and so on. (Note this doesn’t require choice, the procedure may be deterministic.) Switching from reals to nameable numbers seems to require adding more complexity than I’m comfortable with. Also, I enjoy having a notion of Martin-Löf random sequences and random reals, which doesn’t play nice with nameability.
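For concreteness, here is the diagonal trick as runnable Python, under the hypothetical assumption that the naming scheme hands us a computable enumeration name(n) of digit streams—exactly the assumption that fails for definability, as the next comment explains:

```python
# Hypothetical: name(n) is the n-th nameable number's digit stream,
# so name(n)(k) is its k-th decimal digit.
def diagonal(name):
    """A digit stream differing from every enumerated number:
    its n-th digit differs from the n-th digit of name(n)."""
    return lambda n: (name(n)(n) + 1) % 10

# Toy scheme: the n-th number's digits are all equal to n mod 10.
name = lambda n: (lambda k: n % 10)
d = diagonal(name)
print([d(n) for n in range(10)])  # [1, 2, ..., 9, 0]: d(n) != name(n)(n)
```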
By “nameable number” he seems to just mean a definable number—in general an object is called “definable” if there is some first-order property that it and only it satisfies. (Obviously, this depends on just what the surrounding theory is. Sounds like he means “definable in ZFC”.) The set of all definable objects is countable, for obvious reasons.
With this definition, your diagonal trick actually doesn’t work (which is good, because otherwise we’d have a paradox): Definability isn’t a notion expressible in the theory itself, only in the metatheory. Hence if you attempt to “define” something in terms of the set of all definable numbers, the result is only, uh, “metadefinable”. (I gave myself a real headache once over the idea of the “first undefinable ordinal”; thanks to JoshuaZ for pointing out to me why this isn’t a paradox.)
EDIT: I should point out, using definable numbers seems kind of awful, because they’re defined (sorry, metadefined :P ) in terms of logic-stuff that depends on the surrounding theory. Computable numbers, though more restrictive, might behave a little better, I expect...
EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory… that makes it rather less awful than I suggested.
The set of all definable objects is countable, for obvious reasons.
With this definition, your diagonal trick actually doesn’t work (which is good, because otherwise we’d have a paradox): Definability isn’t a notion expressible in the theory itself, only in the metatheory. Hence if you attempt to “define” something in terms of the set of all definable numbers, the result is only, uh, “metadefinable”.
We could similarly argue that the definable objects should be thought of as “meta-countable” rather than countable, right? The reals-implied-by-a-theory would always be uncountable-in-the-theory. (I’m tempted to imagine a world in which this ended the argument between constructivists and classicists, but realistically, one side or the other would end up feeling uneasy about such a compromise… more likely, both.)
I think you’re confusing levels here. When I spoke of “the surrounding theory” above, I didn’t mean the, uh, actual ambient theory. (Sorry about that—I may have gotten a little mixed up myself.) And indeed, like I said, definability only depends on the language, not the theory. Well—of course it still depends on the actual ambient theory. But working internal to that (which I was doing), it only depends on the language. And then one can talk about the metalanguage, staying internal to the same ambient theory, and so on (mind you, all this is assuming that the ambient theory is powerful enough to talk about this sort of thing).
So at no point was I intending to vary the actual ambient theory, like you seem to be talking about.
Warning: I don’t quite understand just how logicians think of these things and so may be confused myself.
You’re correct to point out that I’m being too vague, and I’m making mistakes speaking as if nameable numbers constitute a set or a single alternative to the reals or the rationals.
However, I’ve been a consumer of theorems and proofs that casually use real numbers as if they’re lightweight objects. Considerable effort is involved in parsing the underlying concepts out of the theorems and proofs and re-formalizing them using something reasonable (completions of the natural numbers under various operations: additive inverse, multiplicative inverse, square roots, roots of polynomials in general, solutions of differential equations in general). Those are all different sets of nameable numbers, and they’re all countable.
I would prefer that mathematicians routinely perceived “the reals” as a peculiar construction, and that instead of reaching for them routinely as the standard tool for modeling positions and distances when working on concepts in geometry or symmetry, they thought about what properties they actually need to get the job done.
I know of one mathematician who thinks the real numbers are a peculiar construction in the context of topology because of the pathological things you can do with them — continuous nowhere-differentiable curves, space-filling curves, and so on. That’s why she studies motivic/A1 homotopy theory instead of classical homotopy theory; only polynomial functions are allowed.
I would prefer that mathematicians routinely perceived “the reals” as a peculiar construction, and that instead of reaching for them routinely as the standard tool for modeling positions and distances when working on concepts in geometry or symmetry, they thought about what properties they actually need to get the job done.
Why is it that mathematicians so love the idea of doing their work blindfolded and with their hands tied behind their backs? Someone invented the reals. They’re awesome things. And people invented all sorts of techniques you can use the reals for. Make the most of it! Leave proving when the reals are applicable, showing how such a peculiar construction can be derived, and angsting about how deep and convoluted the basis for using them must be to the specialists in that sort of angsting.
It’s the same as programmers insisting on introducing abstractions decoupling their code from the framework and libraries that they’re using; modularity to prevent dependency hell.
I think it is the same as programmers choosing to use languages with built-in support for floating-point calculations and importing standard math and stats libraries as appropriate. This is an alternative to rolling your own math functions to model your calculations using integers or bytes.
Your shot.
Are you suggesting that modeling real numbers with floating point is a good practice?
Yes, it is a standard practice, and it may be the best compromise available for a programmer or a team on a limited time budget, but the enormous differences between real numbers and floating-point numbers mean that everything that was true upstream in math-land regarding real numbers becomes suspect and has to be re-checked, or transformed into corresponding not-quite-the-same theorems and proofs.
If we (downstream consumers of mathematics) could get mathematicians to use realistic primitives, then we could put a lot of numerical analysts out of work (free up their labor to do something more valuable).
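A minimal demonstration of how theorems about the reals become suspect in floating point—two identities that hold for real numbers but fail for IEEE doubles:

```python
# Identities that are theorems about the reals but false for doubles.
a, b, c = 0.1, 0.2, 0.3
print(a + b == c)                  # False: a + b is 0.30000000000000004

x, y, z = 1e16, -1e16, 1.0
print((x + y) + z == x + (y + z))  # False: addition isn't associative
print((x + y) + z, x + (y + z))    # 1.0 0.0
```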
Do you think some constructivist representation of numbers can do better than IEEE floats at removing the need for numerical analysis in most engineering tasks, while still staying fast enough? I’m veeeeery skeptical. It would be a huge breakthrough if it were true.
Yes, that’s my position. In fact, if you had hardware support for an intuitionistic / constructivist representation (intervals, perhaps), my bet would be that the circuits would be simpler than the floating-point hardware implementing the IEEE standard now.
I’m not an expert in the field, but it seems to me that intervals require strictly more complexity than IEEE floats (because you still need to do floating-point arithmetic on the endpoints) and will be unusable in many practical problems because they will get too wide. At least that’s the impression I got from reading a lot of Kahan. Or do you have some more clever scheme in mind?
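To make the widening concern concrete, here is a bare-bones interval sketch in Python—hypothetical, not any real library’s API; production interval arithmetic would use outward-rounded floats for the endpoints rather than exact rationals:

```python
from fractions import Fraction as F

class Interval:
    """Closed interval [lo, hi] with exact rational endpoints."""
    def __init__(self, lo, hi):
        self.lo, self.hi = F(lo), F(hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def width(self):
        return self.hi - self.lo

x = Interval(F(1, 10), F(2, 10))  # x lies somewhere in [0.1, 0.2]
print((x - x).width())            # 1/5, not 0: the dependency problem
# x - x is exactly 0, but interval arithmetic treats the two copies
# of x as independent unknowns, so the bounds can only grow.
```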
Yes, if you have to embrace the same messy compromises, then I am mistaken. My belief (which is founded on studying logic and mathematics in college, and then software development after college) is that better foundations, with effort, show through to better implementations.
Are you suggesting that modeling real numbers with floating point is a good practice?
Good, certainly. Not a universally optimal practice though. There are times when unlimited precision is preferable, despite the higher computational overhead. There are libraries for that too.
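For instance, Python’s standard library ships both exact rationals and user-selectable-precision decimals:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Exact rational arithmetic: no rounding error, ever.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# Decimals at whatever precision you ask for.
getcontext().prec = 50
print(Decimal(1) / Decimal(3))  # 0.33333... to 50 significant digits
```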
So you would prefer that, instead of having one all-purpose number system that we can embed just about any kind of number we like into (not to mention do all kinds of other things with), we had a collection of distinct number systems, each used for some different ad-hoc purpose? How would this be an improvement?
You might consider the fact that, once upon a time, people actually started with the natural numbers—and then, over the ages, gradually felt the need to expand the system of numbers further and further until they ended up with the standard objects of modern mathematics, such as the real number system.
This was not a historical accident. Each new kind of number corresponds to a new kind of operation people wanted to do, that they couldn’t do with existing numbers. If you want to do subtraction, you need negative numbers; if you want to do division, you need rationals; and if you want to take limits of Cauchy sequences, then you need real numbers.
I don’t understand why this should cause computer-programming types any anxiety. A real number is not some kind of mysterious magical entity; it is just the result of applying an operation, call it “lim”, to an object called a “sequence” (a_n).
Real numbers are used because people want to be able to take limits (the usefulness of doing which was established decisively starting in the 17th century). So long as you allow the taking of limits, you are going to be working with the real numbers, or something equivalent. Yes, you could try to examine every particular limit that anyone has ever taken, and put them all into a special class (or several special classes), but that would be ad-hoc, ugly, and completely unnecessary.
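A small illustration of the point: every term of the sequence below is a rational number, but the limit is not—closing the rationals under limits of Cauchy sequences is exactly the step that produces the reals:

```python
from fractions import Fraction

# Newton's iteration for sqrt(2): a Cauchy sequence of rationals
# whose limit is irrational.
a = Fraction(2)
for _ in range(5):
    a = (a + 2 / a) / 2
    print(a, "~", float(a))  # converging to sqrt(2) = 1.41421356...
```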
I think you’re being a bit uncharitable here. You’ve just moved the infinitude/”mysterious magicalness” from talking about real numbers to talking about sequences of rational numbers, and it is in fact possible to classify sequences as definable vs. undefinable, as well as computable vs. uncomputable. (Though definability seems a bit ad-hoc to me personally, seeing as it depends on the ambient theory.) I don’t think it’s really extraordinary to claim that an undefinable or uncomputable sequence is a bit mysterious and possibly somehow unreal.
EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory… that makes it rather less ad-hoc than I suggested.
EDIT: I should probably add, though, that this whole argument seems mostly pointless. Aside from where cousin_it and Jonicholas got to talking about how to represent numbers in computers—and that seems to have hardly anything to do with actual real numbers seeing as those can’t be represented in computers—it seems to be basically just Jonicholas saying “I don’t like the reals for common constructivist reasons” and other people saying “Regardless, they’re valid objects of study”. Is there more to it than that?
I think you’re being a bit uncharitable here. You’ve just moved the infinitude/”mysterious magicalness” from talking about real numbers to talking about sequences of rational numbers
That was deliberate. (How was it uncharitable?)
I don’t think it’s really extraordinary to claim that an undefinable or uncomputable sequence is a bit mysterious and possibly somehow unreal.
It may not be extraordinary, but it’s still a confusion. A confusion that was resolved a century ago, when set theory was axiomatized, and the formalist view emerged. The Cantor/Kronecker debate is over: Cantor was right, Kronecker was wrong.
The source of this confusion seems to be a belief that correspondences between mathematical structures and the physical world are properties of the mathematical structures in question, rather than properties of the physical world. This is a kind of map/territory confusion.
A good point.
Sorry, uncharitable was the wrong word there. I meant you didn’t address the actual apparent problem. Your new comment does.
No, there’s nothing substantive beyond that.
My understanding is that this thread was started, and to some extent kept rolling, by an unrelated thread where I was being extremely hostile to EY, and several people went through all my back posts looking for things to downvote. Patrick found something.
To put myself in the clear, I came across this old comment because I was looking through Doug S’s old posts (because I was idly curious). I replied to your comment because I’m ridiculously pedantic, a virtue in mathematics. I haven’t downvoted any of your comments and I harbor no feelings of antipathy towards you. Eliezer’s a big boy and he can take care of himself.
Now, back to the math debate! I don’t think it’s legitimate to conflate countable sets with sets with finite information content. Here are two counterexamples: 1. The set of busy beaver numbers (a subset of the naturals). 2. The digits of Chaitin’s Omega [make an ordered pair of (digit, position) to represent these] (see http://en.wikipedia.org/wiki/Chaitin%27s_constant). It’s been proved that these sets can’t be constructed by any algorithm.
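For reference, Chaitin’s constant is defined, for a fixed prefix-free universal machine $U$, as the halting probability

$$\Omega \;=\; \sum_{p\,:\,U(p)\ \text{halts}} 2^{-|p|},$$

a perfectly definite real number whose digits nevertheless cannot all be produced by any algorithm, since knowing enough of them would solve the halting problem.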
“The set of busy beaver numbers” communicates, points to, refers to, picks out, a concept in natural language that we can both talk about. It has finite information content (a finite sequence of ASCII characters suffices to communicate it). An analogous sentence in a sufficiently formal language would still have finite information content.
Note that a description, even if it is finite, is not necessarily in the form that you might desire. Transforming the description “the sequence of the first ten natural numbers” into the format “{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}” is easy, but the analogous transformation of “the first 10 busy beaver numbers” is very difficult if not impossible.
As nshepperd points out, an element of a countable set necessarily has finite information content (you can pick it out by providing an integer—“the fifth one” or whatever), while generic elements of uncountable sets cannot be picked out with finite information.
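A sketch of what “pick it out by providing an integer” means in practice, using one standard enumeration—the Calkin–Wilf sequence—of the positive rationals:

```python
from fractions import Fraction

def calkin_wilf(n):
    """The n-th positive rational (n >= 1) in the Calkin-Wilf
    enumeration; the integer n is a complete, finite description."""
    q = Fraction(1)
    for _ in range(n - 1):
        q = 1 / (2 * (q.numerator // q.denominator) + 1 - q)
    return q

print([str(calkin_wilf(n)) for n in range(1, 8)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3']
```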
I think the finite information content comes from being an element of a countable set. Like every other real number, the digits of Chaitin’s constant themselves form a countable set (a sequence), while that set is a member of the uncountable R. Similarly, the busy beaver set is a subset of N, and drawn from the uncountable set 2^N.
Countable sets are useful (or rather, uncountable ones are inconvenient) because you can set up a normalized probability distribution over their contents. But… the set {Chaitin’s Constant} is countable (it has one element) but I still can’t get Omega’s digits. So there still seems to be a bit of mystery here.
Those are theories, which are not generally lost...
I really wish I had the time to write out explicitly why I believe these examples are compelling reasons to use the usual model of the real numbers. I tried, but I’ve already spent too long, and I doubt they would convince you anyway.
People can’t pick out specific examples of numbers that are lost by switching to using nameable numbers,
So? Omega could obliterate 99% of the particles in the known universe, and I wouldn’t be able to name a particular one. If it turns out in the future that these nameable numbers have nice theoretic properties, sure. The effort to rebuild the usual theory doesn’t seem to be worth the benefit of getting rid of uncountability. (Or more precisely, one source of uncountability.)
I think I’ve spent enough time procrastinating on this topic. I don’t see it going anywhere productive.
Suppose someone idly played at manipulating small objects, like bottlecaps, in the real world. Suppose they formed (inductively) some hypotheses, picked some axioms and a formal system, derived the observed truths as consequences of the axioms, and went on to derive some predictions regarding particular long or unusual manipulations of bottlecaps.
If the proofs are correct, the conclusions are true regardless of the outcome of experiments. If you believe that mathematics is self-hosting (interesting and relevant and valuable in itself), that may be sufficient for you. However, you might alternatively take the position that contradictions with experiment would render the previous axioms, theorems, and proofs less interesting because they are less relevant.
Generic real numbers, because of their infinite information content, are not good models of the physical things (positions, distances, velocities, energies) that a casual consumer of mathematics might think they’re natural models of. If you build the real numbers from first-order ZFC axioms, then they do have (via Löwenheim–Skolem) some finite-information-content correspondences—however, those objects look like abstract syntax trees, ramified with details that act as obstacles to finding an analogous structure in the real world.
Of course, whether a number is definable or not depends on the surrounding theory. Stick to the first-order theory of the reals and only algebraic numbers will be definable! Definable in ZF? Or what?
EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory… no difference here between ZF and ZFC...