That was my (charitable) interpretation too, until, to my dismay, Eliezer confirmed (at a meetup) that he had “leanings” in the direction of constructivism/intuitionism—apparently not quite aware of the discredited status of such views in mathematics.
And indeed, when I asked Eliezer where he thinks the standard proof of infinite sets goes wrong, he pointed to the law of the excluded middle.
His idol E.T. Jaynes may be to blame: in PTLS (Probability Theory: The Logic of Science), Jaynes explicitly allied himself with Kronecker, Brouwer, and Poincaré as opposed to Cantor, Hilbert, and Bourbaki—once again apparently not understanding the settled status of that debate on the side of Cantor et al. One is inclined to suspect this is where Eliezer picked up such attitudes.
Eliezer confirmed (at a meetup) that he had “leanings” in the direction of constructivism/intuitionism—apparently not quite aware of the discredited status of such views in mathematics.
Can you elaborate on constructivism, intuitionism, and their discrediting? And what that has to do with the law of the excluded middle? I thought constructivism and intuitionism were epistemological theories, and it isn’t immediately obvious how they apply to mathematics. Does a constructivist mathematician not believe in proof by contradiction?
Also, I don’t know what you mean by “the standard proof of infinite sets”.
I think komponisto is a little confused about the discredited status of intuitionism, and you’re a little confused about math vs epistemology. Here’s a short sweet introduction to intuitionist math and when it’s useful, much in the spirit of Eliezer’s intuitive explanation of Bayes. Scroll down for the connection between intuitionism and infinitesimals—that’s the most exciting bit.
PS: that whole blog is pretty awesome—I got turned on to it by the post “Seemingly impossible functional programs” which demonstrates e.g. how the problem of determining equality of two black-box functions from reals in [0, 1] to booleans turns out to be computationally decidable in finite time (complete with comparison algorithm in Haskell).
I think komponisto is a little confused about the discredited status of intuitionism
Not at all. Precious few are the mathematicians who take the views of Kronecker or Brouwer seriously today. I mean, sure, some historically knowledgeable mathematicians will gladly engage in bull sessions about the traditional “three views” in the philosophy of mathematics (Platonism, intuitionism, and formalism), during which they treat them as if on par with each other. But then they get up the next day and write papers that depend on the Axiom of Choice without batting an eye.
The philosophical parts of intuitionism are mostly useless, but it contains useful mathematical parts like Martin-Löf type theory used in e.g. the Coq proof assistant. Not sure if this is relevant to Eliezer’s “leanings” which started the discussion, but still.
Right, but in this context I wouldn’t label such “mathematical parts” as part of intuitionism per se. What I’m talking about here is a certain school of thought that holds that mainstream (infinitary, nonconstructive) mathematics is in some important sense erroneous. This is a belief that Eliezer has been hitherto unwilling to disclaim—for no reason that I can fathom other than a sense of warm glow around E.T. Jaynes.
(Needless to say, Eliezer is welcome to set the record straight on this any time he wishes...)
I do not understand what the word “erroneous” is supposed to mean in this context.
For the sake of argument, I will go ahead and ask what sort of nonconstructive entities you think an AI needs to reason about, in order to function properly.
Some senses of “erroneous” that might be involved here include (this list is not necessarily intended to be exhaustive):
Mathematically incorrect—i.e. the proofs contain actual logical inconsistencies. This was argued by some early skeptics (such as Kronecker) but is basically indefensible ever since the formulation of axiomatic set theory and results such as Gödel’s on the consistency of the Axiom of Choice. Such a person would have to actually believe the ZF axioms are inconsistent, and I am aware of no plausible argument for this.
Making claims that are epistemologically indefensible, even if possibly true. E.g., maybe there does exist a well-ordering of the reals, but mere mortals are in no position to assert that such a thing exists. Again, axiomatic formalization should have meant the end of this as a plausible stance.
Irrelevant or uninteresting as an area of research because of a “lack of correspondence” with “reality” or “the physical world”. In order to be consistent, a person subscribing to this view would have to repudiate the whole of pure mathematics as an enterprise. If, as is more common, the person is selectively criticizing certain parts of mathematics, then they are almost certainly suffering from map-territory confusion. Mathematics is not physics; the map is not the territory. It is not ordained or programmed into the universe that positive integers must refer specifically to numbers of elementary particles, or some such, any more than the symbolic conventions of your atlas are programmed into the Earth. Hence one cannot make a leap e.g. from the existence of a finite number of elementary particles to the theoretical adequacy of finitely many numbers. To do so would be to prematurely circumscribe the nature of mathematical models of the physical world. Any criticism of a particular area of mathematics as “unconnected to reality” necessarily has to be made from the standpoint of a particular model of reality. But part (perhaps a large part) of the point of doing pure mathematics (besides the fact that it’s fun, of course) is to prepare for the necessity, encountered time and time again in the history of our species, of upgrading—and thus changing—our very model. Not just the model itself but the ways in which mathematical ideas are used in the model. This has often happened in ways that (at least at the time) would have seemed very surprising.
For the sake of argument, I will go ahead and ask what sort of nonconstructive entities you think an AI needs to reason about, in order to function properly.
Well, if the AI is doing mathematics, then it needs to reason about the very same entities that human mathematicians reason about.
Maybe that sounds like begging the question, because you could ask why humans themselves need to reason about those entities (which is kind of the whole point here). But in that case I’m not sure what you’re getting at by switching from humans to AIs.
Do you perhaps mean to ask something like: “What kind of mathematical entities will be needed in order to formulate the most fundamental physical laws?”
Why do you think that the axiomatic formulation of ZFC “should have meant an end” to the stance that ZFC makes claims that are epistemologically indefensible? Just because I can formalize a statement does not make that statement true, even if it is consistent. Many people (including me and apparently Eliezer, though I would guess that my views are different from his) do not think that the axioms of ZFC are self-evident truths.
In general, I find the argument for Platonism/the validity of ZFC based on common acceptance to be problematic because I just don’t think that most people think about these issues seriously. It is a consensus of convenience and inertia. Also, many mathematicians are not Platonists at all but rather formalists—and constructivism is closer to formalism than Platonism is.
Regarding your three bullet points above:
It’s rude to start refuting an idea before you’ve finished defining it.
One of these things is not like the others. There’s nothing wrong with giving us a history of constructive thinking, and providing us with reasons why outdated versions of the theory were found wanting. It’s good style to use parallel construction to build rhetorical momentum. It is terribly dishonest to do both at the same time—it creates the impression that the subjective reasons you give for dismissing point 3 have weight equal to the objective reasons history has given for dismissing points 1 and 2.
Your talk in point 3 about “map-territory confusion” is very strange. Mathematics is all in your head. It’s all map, no territory. You seem to be claiming that constructivists are outside of the mathematical mainstream because they want to bend theory in the direction of a preferred outcome. You then claim that this is outside of the bounds of acceptable mathematical thinking. So what’s wrong with reasoning like this:
“Nobody really likes all of the consequences of the Axiom of Choice, but most people seem willing to put up with its bad behavior because some of the abstractions it enables—like the Real Numbers—are just so damn useful. I wonder how many of the useful properties of the Real Numbers I could capture by building up from (a possibly weakened version of) ZF set theory and a weakened version of the Axiom of Choice?”
I’m sorry, but I don’t think there was anything remotely “rude” or “terribly dishonest” about my previous comment. If you think I am mistaken about anything I said, just explain why. Criticizing my rhetorical style and accusing me of violating social norms is not something I find helpful.
Quite frankly, I also find criticisms of the form “you sound more confident than you should be” rather annoying. E.g:
it creates the impression that the subjective reasons you give for dismissing point 3 have weight equal to the objective reasons history has given for dismissing points 1 and 2.
That’s because for me, the reasons I gave in point 3 do indeed have similar weight to the reasons I gave in points 1 and 2. If you disagree, by all means say so. But to rise up in indignation over the very listing of my reasons—is that really necessary? Would you seriously have preferred that I just list the bullet points without explaining what I thought?
So what’s wrong with reasoning like this:
Nothing at all, except for the false claim that nobody likes the consequences of the Axiom of Choice. (Some people do like them, and why shouldn’t they?)
The target of my critique—and I thought I made this clear in my response to cousin_it—is the critique of mainstream mathematical reasoning, not the research program of exploring different axiomatic set theories. The latter could easily be done by someone fully on board with the soundness of traditional mathematics. Just as it is unnecessary to doubt the correctness of Euclid’s arguments in order to be interested in non-Euclidean geometry.
Criticizing my rhetorical style and accusing me of violating social norms is not something I find helpful.
Until very recently, I held a similar attitude. I think it’s common to be annoyed by this sort of criticism… it’s distracting and rarely relevant.
That said, it seems to me that the above “rarely” isn’t rare enough. If you’re inadvertently violating a social norm, wouldn’t you like to know? If you already know, what does it matter to have it pointed out to you? Just ignore the redundant information.
I think this principle extends to a lot of speculative or subjective criticism. The potential value of just one accurate critique taken to heart seems quite high. Does such criticism have a positive expected value? That depends on the overall cost of the associated inaccurate or redundant statements (i.e., the vast majority of them). It seems this cost can be made to approach zero by just not taking them personally and ignoring them when they’re misguided, so long as they’re sufficiently disentangled from “object-level” statements.
Aaaand this makes me curious. Eliezer, for the sake of argument, do you really think we’d do good by prohibiting the AI from using reductio ad absurdum?
Nope. I do believe in classical first-order logic, I’m just skeptical about infinite sets. I’d like to hear k’s answer, though.
Perhaps this would make a good subject for my inaugural top-level post. I’ll try to write one up in the near future.
Okay. I have several sources of skepticism about infinite sets. One has to do with my never having observed a large cardinal. One has to do with the inability of first-order logic to discriminate different sizes of infinite set (any countably infinite set of first-order statements that has an infinite model has a countably infinite model—i.e. a first-order theory of e.g. the real numbers has countable models as well as the canonical uncountable model) and that higher-order logic proves exactly what a many-sorted first-order logic proves, no more and no less. One has to do with the breakdown of many finite operations, such as size comparison, in a way that e.g. prevents me from comparing two “infinite” collections of observers to determine anthropic probabilities.
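For reference, the model-theoretic fact being invoked in the parenthetical is the downward Löwenheim–Skolem theorem (countable-language case); the display below is just a restatement of that sentence, not an additional claim:

```latex
% Downward Loewenheim-Skolem, countable-language case: first-order axioms
% alone cannot force a model to be uncountable.
|T| \le \aleph_0 \;\wedge\; T \text{ has an infinite model}
  \;\Longrightarrow\; T \text{ has a model of cardinality } \aleph_0 .
```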
The chief argument against my skepticism has to do with the apparent physical existence of minimal closures and continuous quantities, two things that cannot be defined in first-order logic but that would, apparently, if you take higher-order logic at face value, suffice respectively to specify the existence of a unique infinite collection of natural numbers and a unique infinite collection of points on a line.
Another point against my skepticism is that first-order set theory proper and not just first-order Peano Arithmetic is useful to prove e.g. the totalness of the Goodstein function, but while a convenient proof uses infinite ordinals, it’s not clear that you couldn’t build an AI that got by just as well on computable functions without having to think about infinite sets.
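To make the Goodstein point concrete, here is a minimal Haskell sketch (the function names are mine, not from any particular source): each term of the sequence is plain integer arithmetic, and it is only the theorem that the sequence always reaches 0 that needs infinite ordinals, or something of equivalent strength, to prove.

```haskell
-- Write n in hereditary base b (exponents themselves rewritten recursively),
-- then reinterpret every occurrence of the base b as b', returning the value.
bumpBase :: Integer -> Integer -> Integer -> Integer
bumpBase _ _  0 = 0
bumpBase b b' n = c * b' ^ bumpBase b b' e + bumpBase b b' r
  where
    e = last (takeWhile (\k -> b ^ k <= n) [0 ..])  -- largest exponent with b^e <= n
    c = n `div` (b ^ e)                             -- the digit at that position (< b)
    r = n `mod` (b ^ e)                             -- the lower-order remainder

-- Goodstein sequence starting at m: bump the base, subtract one, repeat.
goodstein :: Integer -> [Integer]
goodstein = go 2
  where
    go _ 0 = [0]
    go b n = n : go (b + 1) (bumpBase b (b + 1) n - 1)
```

For example, `goodstein 3` is `[3,3,3,2,1,0]`; the sequence starting at 4 also reaches 0, but only after an astronomically long (though still finite) run.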
My position can be summed up as follows: I suspect that an AI does not have to reason about large infinities, or possibly any infinities at all, in order to deal with reality.
One has to do with the breakdown of many finite operations, such as size comparison, in a way that e.g. prevents me from comparing two “infinite” collections of observers to determine anthropic probabilities.
Born probabilities seem to fit your bill perfectly. :-)
Don’t think I haven’t noticed that. (In fact I believe I wrote about it.)
I reject infinity as anything more than “a number that is big enough for its smallness to be negligible for the purpose at hand.”
My reason for rejecting infinity in its usual sense is very simple: it doesn’t communicate anything. Here you said (about communication) “When you each understand what is in the other’s mind, you are done.” In order to communicate, there has to be something in your mind in the first place, but don’t we all agree infinity can’t ever be in your mind? If so, how can it be communicated?
Edit to clarify: I worded that poorly. What I mean to ask is, Don’t we all agree that we cannot imagine infinity (other than imagining something like, say, a video that seems to never end, or a line that is way longer than you’d ever seem to need)? If you can imagine it, please just tell me how you do it!
Also, “reject” is too strong a word; I merely await a coherent definition of “infinity” that differs from mine.
don’t we all agree infinity can’t ever be in your mind?
Yes but it doesn’t matter. The moon can’t literally be in your mind either. Since your mind is in your brain, then if the moon were in your mind it would be in your brain, and I don’t even know what would happen first: your brain would be crushed against your skull (which would in turn explode), and the weight of the moon would crush you flat (and also destroy whatever continent you were on and then very possibly the whole world).
But you can still think about the moon without it literally having to be in your mind.
Same with infinity.
I can visualize the moon. If I say the word “moon,” and you get a picture of the moon in your mind—or some such thing—then I feel like we’re on the same page. But I can’t visualize “infinity,” or when I do it turns out as above. If I say the word “infinity” and you visualize (or taste, or whatever) something similar, I feel like we’ve communicated, but then you would agree with my first line in the above post. Since you don’t agree, when I say “infinity,” you must get some very different representation in your mind. Does it do the concept any more justice than my representations? If so, please tell me how to experience it.
We refer to things with signs. The signs don’t have to be visual representations. We can think about things by employing the signs which refer to them. What makes the sign for (say) countable infinity refer to it is the way that the sign is used in a mathematical theory (countable infinity being a mathematical concept). Learn the math, and you will learn the concept.
Compare to this: you probably cannot visualize the number 845,264,087,843,113. You can of course visualize the sign I just now wrote for it, but you cannot visualize the number itself (by, for example, visualizing a large bowl with exactly that number of pebbles in it). What you can do is visualize a bowl with a vast number of pebbles in it, while thinking the thought, “this imagined bowl has precisely 845,264,087,843,113 pebbles in it.” Here you would be relying entirely on the sign to make your mental picture into a picture of exactly that number of pebbles. In fact you could dispense with the picture entirely and keep only the sign, and you would successfully be thinking about that number, purely by employing its sign in your thoughts. Note that you can do operations on that sign, such as subtracting another number by manipulating the two signs via the method you learned as a child. So you have mastered (some of) the relevant math, so the sign, as you employ it, really does refer to that number.
Well I agree that I can think just with verbal signs, so long as the verbal sentences or symbolic statements mean something to me (could potentially pay rent*) or the symbols are eventually converted into some other representation that means something to me.
I can think with the infinity symbol, which doesn’t mean anything to me (unless it means what I first said above: in short, “way big enough”), and then later convert the result back into symbols that do mean something to me. So I’m fine with using infinity in math, as long as it’s just a formalism (a symbol) like that.
But here is one reason why I want to object to the “realist” interpretation of infinity via this argument that it’s just a formalism and has no physical or experiential interpretation, besides “way big enough”: The Christian god, for example, is supposed to be infinite this and infinite that. This isn’t intended—AFAIK—as a formalism nor as an approximation (“way powerful enough”), but as an actual statement. Once you realize this really isn’t communicating anything, theological noncognitivism is a snap: the entity in question is shown to be a mere symbol, if anything. (Or, to be completely fair, God could just be a really powerful, really smart dude.) I know there are other major problems with theology, but this approach seems cleanest.
*ETA: This needs an example. Say I have a verbal belief or get trusted verbal data, like a close friend says in a serious and urgent voice, “(You’d better) duck!” The sentence means something to me directly: it means I’ll be better off taking a certain action. That pays rent because I don’t get hit in the head by a snowball or something. To make it into thinking in words (just transforming sentences around using my knowledge of English grammar), my friend might have been a prankster and told me something of the form, “If not A, then not B. If C, then B. If A, then you’d better duck. By the way, C.” Then I’d have to do the semantic transforms to derive the conclusion: “(I’d better) duck!” which means something to me.
To know reality we employ physics. Physics employs calculus. Calculus employs limits. Limits employ infinite sequences. Does that pay enough rent?
I did say I’m fine with using infinity in math as a formalism, and also that statements using it could be reconverted (using mathematical operations) into ones that do pay rent. It’s just that the symbol infinity doesn’t immediately mean anything to me (except my original definition).
But I am interested in the separate idea that limits employ infinite sequences. It of course depends on the definition of limit. The epsilon-delta definition in my highschool textbook didn’t use infinite sequences, except in the sense of “you could go on giving me epsilons and I could go on giving you deltas.” That definition of infinity (if we’ll call it that) directly means something to me: “this process of back and forth is not going to end.” There is also the infinitesimal approach of nonstandard analysis, but see my reply to ata for that.
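For concreteness, the textbook definition being referred to is, in one standard formulation:

```latex
\lim_{x \to a} f(x) = L
  \;\Longleftrightarrow\;
  \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x \;
  \bigl( 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon \bigr).
```

The alternating quantifiers are exactly the “you give me epsilons, I give you deltas” game described above; no completed infinite sequence appears in the definition itself.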
statements using it could be reconverted (using mathematical operations) into ones that do pay rent.
If statement A can be converted into statement B and statement B pays rent, then statement A pays rent.
It’s just that the symbol infinity doesn’t immediately mean anything to me (except my original definition).
Your original definition:
I reject infinity as anything more than “a number that is big enough for its smallness to be negligible for the purpose at hand.”
Is a terrible one for most purposes, because for them, no matter how big you make a finite number, it won’t serve the purpose.
Also, meaning is not immediate. Your sense that a word means something may arise with no perceptible delay, but meaning takes time. To use the point you raised, meaning pays rent and rent takes time to pay. Anticipated sensory experiences are scheduled to occur in the future, i.e. after a delay. The immediate sense that a word means something is not, itself, the meaning, but only a reliable intuition that the word means something. If you study the mathematics of infinity, then you will likewise develop the intuition that infinity means something.
The epsilon-delta definition in my highschool textbook didn’t use infinite sequences, except in the sense of “you could go on giving me epsilons and I could go on giving you deltas.”
The epsilon delta definition is meaningful because of the infinite divisibility of the reals.
That definition of infinity (if we’ll call it that) directly means something to me: “this process of back and forth is not going to end.”
Unlike your original definition, this is a good definition (at least, once it’s been appropriately cleaned up and made precise).
statements using it could be reconverted (using mathematical operations) into ones that do pay rent.
If statement A can be converted into statement B and statement B pays rent, then statement A pays rent.
Only if the mathematical operation is performed by pure logical entailment—which it would not be, if a meaningless definition of infinity is used and that definition is scrapped in the final statement. We could go back and forth about what constitutes a mathematical operation and such, but all I am saying is that if there is a formal manipulation rule that says something like, “You can change the infinity symbol to ‘big enough’* here” (note: this is not logical entailment), then I have no objection to the use of the formal symbol “infinity.”
*ETA: or just use the definition we agree on instead. This is a minor technical point, hard to explain, and I’m not doing a good job of it. I’ll leave it in just in case you started a reply to it already, but I don’t think it will help many people understand what I’m talking about, rather than just reading the parts below this.
I reject infinity as anything more than “a number that is big enough for its smallness to be negligible for the purpose at hand.”
Is a terrible one for most purposes, because for them, no matter how big you make a finite number, it won’t serve the purpose.
For example? Although, if we agree on the definition below, there’s maybe no point.
The immediate sense that a word means something is not, itself, the meaning, but only a reliable intuition that the word means something.
That’s why I said “could potentially pay rent.”
The epsilon-delta definition in my highschool textbook didn’t use infinite sequences, except in the sense of “you could go on giving me epsilons and I could go on giving you deltas.”
But then it did use infinite sequences.
That definition of infinity (if we’ll call it that) directly means something to me: “this process of back and forth is not going to end.”
Unlike your original definition, this is a good definition (at least, once it’s been appropriately cleaned up and made precise).
Looks like we’re in agreement, then, and I am not a finitist if that is what is meant by infinite sequences.
But then, to take it back to the original, I still agree with Eliezer that an “infinite set” is a dubious concept. Infinite as an adverb I can take (describes a process that isn’t going to end (in the sense that expecting it to end never pays rent)); infinite as an adjective, and infinity the noun, seem like reification: Harmless in some contexts, but harmful in others.
For example? Although, if we agree on the definition below, there’s maybe no point.
A very early appearance of infinity is the proof that there are infinitely many primes. It is most certainly not a proof that there is a very large but finite number of primes.
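Read constructively, Euclid’s argument is itself an algorithm: hand it any finite list of primes and it returns a prime missing from the list, which is one way to cash out “you will never run out of primes”. A minimal Haskell sketch (the names are mine):

```haskell
-- The smallest divisor >= 2 of any n >= 2 is automatically prime.
smallestPrimeFactor :: Integer -> Integer
smallestPrimeFactor n = head [d | d <- [2 ..], n `mod` d == 0]

-- Euclid's construction: the product of the given primes, plus one, has a
-- prime factor dividing none of them (otherwise that factor would divide 1).
anotherPrime :: [Integer] -> Integer
anotherPrime ps = smallestPrimeFactor (product ps + 1)
```

For instance, `anotherPrime [2,3,5,7,11,13]` is 59 (since 30031 = 59 × 509), a prime not on the input list.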
I can agree with “there are infinitely many primes” if I interpret it as something like “if I ever expect to run out of primes, that belief won’t pay rent.”
In this case, and in most cases in mathematics, these statements may look and operate the same—except mine might be slower and harder to work with. So why do I insist on it? I’m happy to work with infinities for regular math stuff, but there are some cases where it does matter, and these might all be outside of pure math. But in applied math there can be problems if infinity is taken seriously as a static concept rather than as a process for which the expectation that it will end never pays rent.
Like if someone said, “Black holes have infinite density,” I would have to ask for clarification. Can it be put into a verbal form at least? How would it pay rent in terms of measurements? That kind of thing.
Like if someone said, “Black holes have infinite density,” I would have to ask for clarification.
Actually, the way I learned calculus, allowable values of functions are real (or complex), not infinite. The value of the function 1/x at x=0 is not “infinity”, but “undefined” (which is to say, there is no function at that point); similarly for derivatives of functions where the functions go vertical. Since that time, I discovered that apparently physicists have supplemented the calculus I know with infinite values. They actually did it because this was useful to them. Don’t ask me why, I don’t remember. But here is a case where the pure math does not have infinities, and then the practical folk over in the physics department add them in. Apparently the practical folk think that infinity can pay rent.
As for gravitational singularities, the problem here is not the concept of infinity. That is an innocent bystander. The problem is that the math breaks down. That happens even if you replace “infinite” with “undefined”.
This isn’t really correct. Allowable values of functions are whatever you want. If you define a function on R-{0} by “x goes to 1/x”, it’s not defined at 0; I explicitly excluded it from the domain. If you define a function on R by “x goes to 1/x”… you can’t, there’s no such thing as 1⁄0. If you define a function on R by “x goes to 1/x if x is nonzero, and 0 goes to infinity”, this is a perfectly sensible function, which it is convenient to just abbreviate as “1/x”. Though for obvious reasons I would only recommend doing this if the “infinity” you are using represents both arbitrarily large positive and negative quantities. (EDIT: But if you want to define a function on [0,infty) by “x goes to 1/x if x is nonzero, and 0 goes to infinity” with “infinity” now only being large in the positive direction, which is likely what’s actually under consideration here, then this is not so dumb.)
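A concrete rendering of the parenthetical EDIT above, as a sketch only (the type and names are mine, with Double standing in for the reals): adjoin a single point at infinity to [0, ∞) and the reciprocal becomes an ordinary total function.

```haskell
-- The non-negative reals with one extra point adjoined at infinity.
data ExtNonNeg = Finite Double | Infinity
  deriving (Eq, Show)

-- "x goes to 1/x if x is nonzero, and 0 goes to infinity", now total
-- (assuming the argument is drawn from [0, infinity)).
recipExt :: Double -> ExtNonNeg
recipExt 0 = Infinity
recipExt x = Finite (1 / x)
```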
All this is irrelevant to any actual physical questions, where whether using infinities is appropriate or not just depends on, well, the physics of it.
Yes, and of course which theory will be appropriate is going to be determined by the actual physics. My point is just that your statement that “pure math does not have infinities” and physicists “added them in” is wrong (even ignoring historical inaccuracies).
But here is a case where the pure math does not have infinities
That is not a statement that the field of mathematics does not have infinities. I was referring specifically to “the way I learned calculus”. Unless you took my class, you don’t know what I did or did not learn and how I learned it. My statement was true, your “correction” was false.
Ah, sorry then. This is the sort of mistake that’s common enough that it seemed more obvious to me to read it that way rather than the literal and correct way.
As for gravitational singularities, the problem here is not the concept of infinity. That is an innocent bystander. The problem is that the math breaks down.
I never really got why the math is said to ‘break down’. Is it just because of a divide by zero thing or something more significant? I guess I just don’t see a particular problem with having a part of the universe really being @%%@ed up like that.
I guess I just don’t see a particular problem with having a part of the universe really being @%%@ed up like that.
What I think is more likely is that the universe does not actually divide by zero, and the singularity is a gap in our knowledge. Gaps in knowledge are the problem of science, whose function is to fill them.
“Infinity-noncognitivist” would be more accurate in my case (but it all depends on the definition; I await one that I can see how to interpret, and I accept all the ones that I already know how to interpret [some mentioned above]).
From your post it sounds like you in fact do not have a clear picture of infinity in your head. I have a feeling this is true for many people, so let me try to paint one. Throughout this post I’ll be using “number” to mean “positive integer”.
Suppose that there is a distinction we can draw between certain types of numbers and other types of numbers. For example, we could make a distinction between “primes” and “non-primes”. A standard way to communicate the fact that we have drawn this distinction is to say that there is a “set of all primes”. This language need not be construed as meaning that all primes together can be coherently thought of as forming a collection (though it often is construed that way, usually pretty carelessly); the key thing is just that the distinction between primes and non-primes is itself meaningful. In the case of primes, the fact that the distinction is meaningful follows from the fact that there is an algorithm to decide whether any given number is prime.
Now for “infinite”: A set of numbers is called infinite if for every number N, there exists a number greater than N in the set. For example, Euclid proved that the set of primes is infinite under this definition.
Now this definition is a little restrictive in terms of mathematical practice, since we will often want to talk about sets that contain things other than numbers, but the basic idea is similar in the general case: the semantic function of a set is provided not by the fact that its members “form a collection” (whatever that might mean), but rather by the fact that there is a distinction of some kind (possibly of the kind that can be determined by an algorithm) between things that are in the set and things that are not in the set. In general a set is “infinite” if for every number N, the set contains more than N members (i.e. there are more than N things that satisfy the condition that the set encodes).
So that’s “infinity”, as used in standard mathematical practice. (Well, there’s also a notion of “infinity” in real analysis which essentially is just a placeholder symbol for “a really large number”, but when people talk about the philosophical issues behind infinity it is usually about the definition I just gave above, not the one in real analysis, which is not controversial.) Now, why is this at all controversial? Well, note that to define it, I had to talk about the notion of distinctions-in-general, as opposed to any individual distinction. But is it really coherent to talk about a notion of distinctions-in-general? Can it be made mathematically precise? This is really what the philosophical arguments are all about: what kinds of things are allowed to count as distinctions. The constructivists take the point of view that the only things that should be allowed to count as distinctions are those that can be computed by algorithms. There are some bullets to bite if you take this point of view though. For example, the twin prime conjecture states that for every number N, there exists p > N such that both p and p+2 are prime. Presumably this is either true or false, even if nobody can prove it. Moreover, presumably each number N either is or is not a counterexample to the conjecture. But then it would seem that it is possible to draw a distinction between those N which satisfy the conclusion of the conjecture, and those which are counterexamples. Yet this is false according to the constructive point of view, since there is no algorithm to determine whether any given N satisfies the conclusion of the conjecture.
I guess this is probably long enough already given that I’m replying to a five-year-old post… I could say more on this topic if people are interested.
I think my original sentence is correct; there is no known algorithm that provably outputs the answer to the question “Does N satisfy the conclusion of the conjecture?” given N as an input. To do this, an algorithm would need to do both of the following: output “Yes” if and only if N satisfies the conclusion, and output “No” if and only if N does not satisfy the conclusion. There are known algorithms that do the first but not the second (unless the twin prime conjecture happens to be true).
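To make the asymmetry concrete, here is the “first half” as a naive Haskell sketch (names mine): it halts with a witness whenever N does satisfy the conclusion, but if N were a counterexample it would simply search forever, and nobody knows how to supply a terminating “No” branch.

```haskell
-- Naive primality test by trial division (fine for illustration).
isPrime :: Integer -> Bool
isPrime n = n >= 2 && all (\d -> n `mod` d /= 0) (takeWhile (\d -> d * d <= n) [2 ..])

-- Semi-decision procedure for "N satisfies the conclusion of the twin prime
-- conjecture": search upward for a twin prime pair above N.  It can answer
-- "Yes" (by returning a witness) but can never answer "No".
twinAbove :: Integer -> Integer
twinAbove n = head [p | p <- [n + 1 ..], isPrime p, isPrime (p + 2)]
```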
You’re pointing to a concept represented in your brain, using a label which you expect will evoke analogous representations of that concept in readers’ brains, and asserting that that thing is not something that a human brain could represent.
The various mathematical uses of infinity (infinite cardinals, infinity as a limit in calculus, infinities in nonstandard analysis, etc.) are all well-defined and can be stored as information-bearing concepts in human brains. I don’t think there’s any problem here.
You’re pointing to a concept represented in your brain, using a label which you expect will evoke analogous representations of that concept in readers’ brains, and asserting that that thing is not something that a human brain could represent.
It looks like we agree but you either misread or I was unclear:
I’m not asserting that the definition of infinity I mentioned at the beginning (“a number that is big enough for its smallness to be negligible for the purpose at hand”) is not something a human brain could represent. I’m saying that if the speaker considers “infinity” to be something that a human brain cannot represent, I must question what they are even doing when they utter the word. Surely they are not communicating in the sense Eliezer referred to, of trying to get someone else to have the same content in their head. (If they simply want me to note a mathematical symbol, that is fine, too.)
I also agree that various uses of concepts that could be called infinity in math can be stored in human brains, but that depends on the definitions. I am not “anti-infinity” except if the speaker claims that their infinity cannot be represented in anyone’s mind, but they are talking about it anyway. That would just be a kind of “bluffing,” as it were. If there are sensical definitions of infinity that seem categorically different than the ones I mentioned so far, I’d like to see them.
In short, I just don’t get infinity unless it means one of the things I’ve said so far. I don’t want to be called a “finitist” if I don’t even know what the person means by “infinite.”
PS: that whole blog is pretty awesome—I got turned on to it by the post “Seemingly impossible functional programs” which demonstrates e.g. how the problem of determining equality of two black-box functions from reals in [0, 1] to booleans turns out to be computationally decidable in finite time
Oi, that’s not right. The domain of these functions is not the set of reals in [0, 1] but the set of infinite sequences of bits; while there is a bijection between these two sets, it’s not the obvious one of binary expansion, because in binary, 0.0111… and 0.1000… represent the same real number. There is no topology-preserving bijection between the two sets. Also, the functions have to be continuous; it’s easy to come up with a function (e.g. equality to a certain sequence) for which the given functions don’t work.
Of course, it happens that the usual way of handling “real numbers” in languages like Haskell actually handles things that are effectively the same as bit sequences, and that there’s no way to write a total non-continuous function in a language like Haskell, making my point somewhat moot. So, carry on, then.
Your comment is basically correct. This paper deals with the representation issue somewhat. But I think those results are applicable to computation in general, and the choice of Haskell is irrelevant to the discussion. You’re welcome to prove me wrong by exhibiting a representation of exact reals that allows decidable equality, in any programming language.
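For readers who want to see what is actually being claimed, here is the core of the construction in Haskell, restated from memory and so only a sketch (treat the exact spelling as mine rather than the post’s): the inputs are infinite bit sequences, and equality of total functions out of them is decidable because any such total function can only inspect finitely many bits of its argument.

```haskell
-- Points of Cantor space: infinite sequences of bits, indexed by naturals.
type Cantor = Integer -> Bool

-- Prepend one bit to a sequence.
(#) :: Bool -> Cantor -> Cantor
(x # a) i = if i == 0 then x else a (i - 1)

-- Exhaustive search: 'find p' returns a sequence satisfying p whenever one
-- exists (and some arbitrary sequence otherwise).
find :: (Cantor -> Bool) -> Cantor
find p =
  if forsome (\a -> p (False # a))
    then False # find (\a -> p (False # a))
    else True  # find (\a -> p (True  # a))

forsome, forevery :: (Cantor -> Bool) -> Bool
forsome p = p (find p)
forevery p = not (forsome (not . p))

-- Equality of total functions out of Cantor space, decided in finite time.
equal :: Eq y => (Cantor -> y) -> (Cantor -> y) -> Bool
equal f g = forevery (\a -> f a == g a)
```

As noted above, this only works for total (hence continuous) functions; try to sneak in a function that is not total, such as equality to one fixed sequence tested bit by bit with no stopping rule, and the search no longer terminates.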
That was my (charitable) interpretation too, until, to my dismay, Eliezer confirmed (at a meetup) that he had “leanings” in the direction of constructivism/intuitionism—apparently not quite aware of the discredited status of such views in mathematics.
And indeed, when I asked Eliezer where he thinks the standard proof of infinite sets goes wrong, he pointed to the law of the excluded middle.
His idol E.T. Jaynes may be to blame, who in PTLS explicitly allied himself with Kronecker, Brouwer, and Poincaré as opposed to Cantor, Hilbert, and Bourbaki—once again apparently not understanding the settled status of that debate on the side of Cantor et al. One is inclined to suspect this is where Eliezer picked such attitudes up.
Can you elaborate on constructivism, intuitionism, and their discrediting? And what that has to do with the law of the excluded middle? I thought constructivism and intuitionism were epistemological theories, and it isn’t immediately obvious how they apply to mathematics. Does a constructivist mathematician not believe in proof by contradiction?
Also, I don’t know what you mean by “the standard proof of infinite sets”.
I think komponisto is a little confused about the discredited status of intuitionism, and you’re a little confused about math vs epistemology. Here’s a short sweet introduction to intuitionist math and when it’s useful, much in the spirit of Eliezer’s intuitive explanation of Bayes. Scroll down for the connection between intuitionism and infinitesimals—that’s the most exciting bit.
PS: that whole blog is pretty awesome—I got turned on to it by the post “Seemingly impossible functional programs” which demonstrates e.g. how the problem of determining equality of two black-box functions from reals in [0, 1] to booleans turns out to be computationally decidable in finite time (complete with comparison algorithm in Haskell).
Not at all. Precious few are the mathematicians who take the views of Kronecker or Brouwer seriously today. I mean, sure, some historically knowledgeable mathematicians will gladly engage in bull sessions about the traditional “three views” in the philosophy of mathematics (Platonism, intuitionism, and formalism), during which they treat them as if on par with each other. But then they get up the next day and write papers that depend on the Axiom of Choice without batting an eye.
The philosophical parts of intuitionism are mostly useless, but it contains useful mathematical parts like Martin-Löf type theory used in e.g. the Coq proof assistant. Not sure if this is relevant to Eliezer’s “leanings” which started the discussion, but still.
Right, but in this context I wouldn’t label such “mathematical parts” as part of intuitionism per se. What I’m talking about here is a certain school of thought that holds that mainstream (infinitary, nonconstructive) mathematics is in some important sense erroneous. This is a belief that Eliezer has been hitherto unwilling to disclaim—for no reason that I can fathom other than a sense of warm glow around E.T. Jaynes.
(Needless to say, Eliezer is welcome to set the record straight on this any time he wishes...)
I do not understand what the word “erroneous” is supposed to mean in this context.
For the sake of argument, I will go ahead and ask what sort of nonconstructive entities you think an AI needs to reason about, in order to function properly.
Some senses of “erroneous” that might be involved here include (this list is not necessarily intended to be exhaustive):
Mathematically incorrect—i.e. the proofs contain actual logical inconsistencies. This was argued by some early skeptics (such as Kronecker) but is basically indefensible ever since the formulation of axiomatic set theory and results such as Gödel’s on the consistency of the Axiom of Choice. Such a person would have to actually believe the ZF axioms are inconsistent, and I am aware of no plausible argument for this.
Making claims that are epistemologically indefensible, even if possibly true. E.g., maybe there does exist a well-ordering of the reals, but mere mortals are in no position to assert that such a thing exists. Again, axiomatic formalization should have meant the end of this as a plausible stance.
Irrelevant or uninteresing as an area of research because of a “lack of correspondence” with “reality” or “the physical world”. In order to be consistent, a person subscribing to this view would have to repudiate the whole of pure mathematics as an enterprise. If, as is more common, the person is selectively criticizing certain parts of mathematics, then they are almost certainly suffering from map-territory confusion. Mathematics is not physics; the map is not the territory. It is not ordained or programmed into the universe that positive integers must refer specifically to numbers of elementary particles, or some such, any more than the symbolic conventions of your atlas are programmed into the Earth. Hence one cannot make a leap e.g. from the existence of a finite number of elementary particles to the theoretical adequacy of finitely many numbers. To do so would be to prematurely circumscribe the nature of mathematical models of the physical world. Any criticism of a particular area of mathematics as “unconnected to reality” necessarily has to be made from the standpoint of a particular model of reality. But part (perhaps a large part) of the point of doing pure mathematics (besides the fact that it’s fun, of course), is to prepare for the necessity, encountered time and time again in the history of our species, of upgrading—and thus changing—our very model. Not just the model itself but the ways in which mathematical ideas are used in the model. This has often happened in ways that (at least at the time) would have seemed very surprising.
Well, if the AI is doing mathematics, then it needs to reason about the very same entities that human mathematicians reason about.
Maybe that sounds like begging the question, because you could ask why humans themselves need to reason about those entities (which is kind of the whole point here). But in that case I’m not sure what you’re getting at by switching from humans to AIs.
Do you perhaps mean to ask something like: “What kind of mathematical entities will be needed in order to formulate the most fundamental physical laws?”
Why do you think that the axiomatic formulation of ZFC “should have meant an end” to the stance that ZFC makes claims that are epistemologically indefensible? Just because I can formalize a statement does not make that statement true, even if it is consistent. Many people (including me and apparently Eliezer, though I would guess that my views are different from his) do not think that the axioms of ZFC are self-evident truths.
In general, I find the argument for Platonism/the validity of ZFC based on common acceptance to be problematic because I just don’t think that most people think about these issues seriously. It is a consensus of convenience and inertia. Also, many mathematicians are not Platonists at all but rather formalists—and constructivism is closer to formalism than Platonism is.
Regarding your three bullet points above:
It’s rude to start refuting an idea before you’ve finished defining it.
One of these things is not like the others. There’s nothing wrong with giving us a history of constructive thinking, and providing us with reasons why outdated versions of the theory were found wanting. It’s good style to use parallel construction to build rhetorical momentum. It is terribly dishonest to do both at the same time—it creates the impression that the subjective reasons you give for dismissing point 3 have weight equal to the objective reasons history has given for dismissing points 1 and 2.
Your talk in point 3 about “map-territory confusion” is very strange. Mathematics is all in your head. It’s all map, no territory. You seem to be claiming that constructivsts are outside of the mathematical mainstream because they want to bend theory in the direction of a preferred outcome. You then claim that this is outside of the bounds of acceptable mathematical thinking, So what’s wrong with reasoning like this:
“Nobody really likes all of the consequences of the Axiom of Choice, but most people seem willing to put up with its bad behavior because some of the abstractions it enables—like the Real Numbers—are just so damn useful. I wonder how many of the useful properties of the Real Numbers I could capture by building up from (a possibly weakened version of) ZF set theory and a weakened version of the Axiom of Choice?”
I’m sorry, but I don’t think there was anything remotely “rude” or “terribly dishonest” about my previous comment. If you think I am mistaken about anything I said, just explain why. Criticizing my rhetorical style and accusing me of violating social norms is not something I find helpful.
Quite frankly, I also find criticisms of the form “you sound more confident than you should be” rather annoying. E.g:
That’s because for me, the reasons I gave in point 3 do indeed have similar weight to the reasons I gave in points 1 and 2. If you disagree, by all means say so. But to rise up in indignation over the very listing of my reasons—is that really necessary? Would you seriously have preferred that I just list the bullet points without explaining what I thought?
Nothing at all, except for the false claim that nobody likes the consequences of the Axiom of Choice. (Some people do like them, and why shouldn’t they?)
The target of my critique—and I thought I made this clear in my response to cousin_it—is the critique of mainstream mathematical reasoning, not the research program of exploring different axiomatic set theories. The latter could easily be done by someone fully on board with the soundness of traditional mathematics. Just as it is unnecessary to doubt the correctness of Euclid’s arguments in order to be interested in non-Euclidean geometry.
Until very recently, I held a similar attitude. I think it’s common to be annoyed by this sort of criticism… it’s distracting and rarely relevant.
That said, it seems to me that the above “rarely” isn’t rare enough. If you’re inadvertently violating a social norm, wouldn’t you like to know? If you already know, what does it matter to have it pointed out to you? Just ignore the redundant information.
I think this principle extends to a lot of speculative or subjective criticism. The potential value of just one accurate critique taken to heart seems quite high. Does such criticism have a positive expected value? That depends on the overall cost of the associated inaccurate or redundant statements (i.e., the vast majority of them). It seems this cost can be made to approach zero by just not taking them personally and ignoring them when they’re misguided, so long as they’re sufficiently disentangled from “object-level” statements.
Aaaand this makes me curious. Eliezer, for the sake of argument, do you really think we’d do good by prohibiting the AI from using reductio ad absurdum?
Nope. I do believe in classical first-order logic, I’m just skeptical about infinite sets. I’d like to hear k’s answer, though.
Perhaps this would make a good subject for my inaugural top-level post. I’ll try to write one up in the near future.
Okay. I have several sources of skepticism about infinite sets. One has to do with my never having observed a large cardinal. One has to do with the inability of first-order logic to discriminate different sizes of infinite set (any countably infinite set of first-order statements that has an infinite model has a countably infinite model—i.e. a first-order theory of e.g. the real numbers has countable models as well as the canonical uncountable model) and that higher-order logic proves exactly what a many-sorted first-order logic proves, no more and no less. One has to do with the breakdown of many finite operations, such as size comparison, in a way that e.g. prevents me from comparing two “infinite” collections of observers to determine anthropic probabilities.
The chief argument against my skepticism has to do with the apparent physical existence of minimal closures and continuous quantities, two things that cannot be defined in first-order logic but that would, apparently, if you take higher-order logic at face value, suffice respectively to specify the existence of a unique infinite collection of natural numbers and a unique infinite collection of points on a line.
Another point against my skepticism is that first-order set theory proper and not just first-order Peano Arithmetic is useful to prove e.g. the totalness of the Goodstein function, but while a convenient proof uses infinite ordinals, it’s not clear that you couldn’t build an AI that got by just as well on computable functions without having to think about infinite sets.
My position can be summed up as follows: I suspect that an AI does not have to reason about large infinities, or possibly any infinities at all, in order to deal with reality.
One has to do with the breakdown of many finite operations, such as size comparison, in a way that e.g. prevents me from comparing two “infinite” collections of observers to determine anthropic probabilities.
Born probabilities seem to fit your bill perfectly. :-)
Don’t think I haven’t noticed that. (In fact I believe I wrote about it.)
I reject infinity as anything more than “a number that is big enough for its smallness to be negligible for the purpose at hand.”
My reason for rejecting infinity in it’s usual sense is very simple: it doesn’t communicate anything. Here you said (about communication) “When you each understand what is in the other’s mind, you are done.” In order to communicate, there has to be something in your mind in the first place, but don’t we all agree infinity can’t ever be in your mind? If so, how can it be communicated?
Edit to clarify: I worded that poorly. What I mean to ask is, Don’t we all agree that we cannot imagine infinity (other than imagine something like, say, a video that seems to never end, or a line that is way longer than you’d ever seem to need)? If you can imagine it, please just tell me how you do it!
Also, “reject” is too strong a word; I merely await a coherent definition of “infinity” that differs from mine.
Yes but it doesn’t matter. The moon can’t literally be in your mind either. Since your mind is in your brain, then if the moon were in your mind it would be in your brain, and I don’t even know what would happen first: your brain would be crushed against your skull (which would in turn explode), and the weight of the moon would crush you flat (and also destroy whatever continent you were on and then very possibly the whole world).
But you can still think about the moon without it literally having to be in your mind.
Same with infinity.
I can visualize the moon. If I say the word “moon,” and you get a picture of the moon in your mind—or some such thing—then I feel like we’re on the same page. But I can’t visualize “infinity,” or when I do it turns out as above. If I say the word “infinity” and you visualize (or taste, or whatever) something similar, I feel like we’ve communicated, but then you would agree with my first line in the above post. Since you don’t agree, when I say “infinity,” you must get some very different representation in your mind. Does it do the concept any more justice that my representations? If so, please tell me how to experience it.
We refer to things with signs. The signs don’t have to be visual representations. We can think about things by employing the signs which refer to them. What makes the sign for (say) countable infinity refer to it is the way that the sign is used in a mathematical theory (countable infinity being a mathematical concept). Learn the math, and you will learn the concept.
Compare to this: you probably cannot visualize the number 845,264,087,843,113. You can of course visualize the sign I just now wrote for it, but you cannot visualize the number itself (by, for example, visualizing a large bowl with exactly that number of pebbles in it). What you can do is visualize a bowl with a vast number of pebbles in it, while thinking the thought, “this imagined bowl has precisely 845,264,087,843,113 pebbles in it.” Here you would be relying entirely on the sign to make your mental picture into a picture of exactly that number of pebbles. In fact you could dispense with the picture entirely and keep only the sign, and you would successfully be thinking about that number, purely by employing its sign in your thoughts. Note that you can do operations on that sign, such as subtracting another number by manipulating the two signs via the method you learned as a child. So you have mastered (some of) the relevant math, so the sign, as you employ it, really does refer to that number.
Well I agree that I can think just with verbal signs, so long as the verbal sentences or symbolic statements mean something to me (could potentially pay rent*) or the symbols are eventually converted into some other representation that means something to me.
I can think with the infinity symbol, which doesn’t mean anything to me (unless it means what I first said above: in short, “way big enough”), and then later convert the result back into symbols that do mean something to me. So I’m fine with using infinity in math, as long as it’s just a formalism (a symbol) like that.
But here is one reason why I want to object to the “realist” interpretation of infinity via this argument that it’s just a formalism and has no physical or experiential interpretation, besides “way big enough”: The Christian god, for example, is supposed to be infinite this and infinite that. This isn’t intended—AFAIK—as a formalism nor as an approximation (“way powerful enough”), but as an actual statement. Once you realize this really isn’t communicating anything, theological noncognitivism is a snap: the entity in question is shown to be a mere symbol, if anything. (Or, to be completely fair, God could just be a really powerful, really smart dude.) I know there are other major problems with theology, but this approach seems cleanest.
*ETA: This needs an example. Say I have a verbal belief or get trusted verbal data, like a close friend says in a serious and urgent voice, “(You’d better) duck!” The sentence means something to me directly: it means I’ll be better off taking a certain action. That pays rent because I don’t get hit in the head by a snowball or something. To make it into thinking in words (just transforming sentences around using my knowledge of English grammar), my friend might have been a prankster and told me something of the form, “If not A, then not B. If C, then B. If A, then you’d better duck. By the way, C.” Then I’d have to do the semantic transforms to derive the conclusion: “(I’d better) duck!” which means something to me.
To know reality we employ physics. Physics employs calculus. Calculus employs limits. Limits employ infinite sequences. Does that pay enough rent?
I did say I’m fine with using infinity in math as a formalism, and also that statements using it could be reconverted (using mathematical operations) into ones that do pay rent. It’s just that the symbol infinity doesn’t immediately mean anything to me (except my original definition).
But I am interested in the separate idea that limits employ infinite sequences. It of course depends on the definition of limit. The epsilon-delta definition in my highschool textbook didn’t use infinite sequences, except in the sense of “you could go on giving me epsilons and I could go on giving you deltas.” That definition of infinity (if we’ll call it that) directly means something to me: “this process of back and forth is not going to end.” There is also the infinitesimal approach of nonstandard analysis, but see my reply to ata for that.
If statement A can be converted into statement B and statement B pays rent, then statement A pays rent.
Your original definition:
Is a terrible one for most purposes, because for those purposes, no matter how big you make a finite number, it won’t serve.
Also, meaning is not immediate. Your sense that a word means something may arise with no perceptible delay, but meaning takes time. To use the point you raised, meaning pays rent and rent takes time to pay. Anticipated sensory experiences are scheduled to occur in the future, i.e. after a delay. The immediate sense that a word means something is not, itself, the meaning, but only a reliable intuition that the word means something. If you study the mathematics of infinity, then you will likewise develop the intuition that infinity means something.
The epsilon-delta definition is meaningful because of the infinite divisibility of the reals.
Unlike your original definition, this is a good definition (at least, once it’s been appropriately cleaned up and made precise).
Only if the mathematical operation is performed by pure logical entailment, which—if a meaningless definition of infinity is used and that definition is scrapped in the final statement—it would not be. We could go on arguing about what constitutes a mathematical operation and such, but all I am saying is that if there is a formal manipulation rule that says something like, “You can change the infinity symbol to ‘big enough’* here” (note: this is not logical entailment), then I have no objection to the use of the formal symbol “infinity.”
*ETA: or just use the definition we agree on instead. This is a minor technical point, hard to explain, and I’m not doing a good job of it. I’ll leave it in just in case you started a reply to it already, but I don’t think it will help many people understand what I’m talking about, rather than just reading the parts below this.
For example? Although, if we agree on the definition below, there’s maybe no point.
That’s why I said “could potentially pay rent.”
Looks like we’re in agreement, then, and I am not a finitist if that is what is meant by infinite sequences.
But then, to take it back to the original, I still agree with Eliezer that an “infinite set” is a dubious concept. “Infinitely” as an adverb I can accept: it describes a process that isn’t going to end (in the sense that expecting it to end never pays rent). “Infinite” as an adjective, and “infinity” as a noun, seem like reification: harmless in some contexts, but harmful in others.
A very early appearance of infinity is the proof that there are infinitely many primes. It is most certainly not a proof that there is a very large but finite number of primes.
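For what it’s worth, that proof has a perfectly constructive reading, which a short Haskell sketch makes vivid (the function name newPrime is mine, purely for illustration): from any finite list of primes you can compute a prime that is not on the list.

    -- From any finite list of primes, produce a prime not in the list:
    -- (product ps + 1) leaves remainder 1 when divided by each p in ps,
    -- so its smallest divisor >= 2 (which is automatically prime) is new.
    newPrime :: [Integer] -> Integer
    newPrime ps = head [d | d <- [2 ..], m `mod` d == 0]
      where m = product ps + 1

    -- e.g. newPrime [2,3,5,7]        == 211  (2*3*5*7 + 1 is itself prime)
    --      newPrime [2,3,5,7,11,13]  == 59   (since 30031 = 59 * 509)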
I can agree with “there are infinitely many primes” if I interpret it as something like “if I ever expect to run out of primes, that belief won’t pay rent.”
In this case, and in most cases in mathematics, these statements may look and operate the same—except mine might be slower and harder to work with. So why do I insist on it? I’m happy to work with infinities for regular math stuff, but there are some cases where it does matter, and these might all be outside of pure math. But in applied math there can be problems if infinity is taken seriously as a static concept rather than as a process where the expectation that it will end will never pay rent.
Like if someone said, “Black holes have infinite density,” I would have to ask for clarification. Can it be put into a verbal form at least? How would it pay rent in terms of measurements? That kind of thing.
Actually, the way I learned calculus, allowable values of functions are real (or complex), not infinite. The value of the function 1/x at x=0 is not “infinity”, but “undefined” (which is to say, there is no function at that point); similarly for derivatives of functions where the functions go vertical. Since that time, I discovered that apparently physicists have supplemented the calculus I know with infinite values. They actually did it because this was useful to them. Don’t ask me why, I don’t remember. But here is a case where the pure math does not have infinities, and then the practical folk over in the physics department add them in. Apparently the practical folk think that infinity can pay rent.
As for gravitational singularities, the problem here is not the concept of infinity. That is an innocent bystander. The problem is that the math breaks down. That happens even if you replace “infinite” with “undefined”.
This isn’t really correct. Allowable values of functions are whatever you want. If you define a function on R-{0} by “x goes to 1/x”, it’s not defined at 0; I explicitly excluded it from the domain. If you define a function on R by “x goes to 1/x”… you can’t, there’s no such thing as 1/0. If you define a function on R by “x goes to 1/x if x is nonzero, and 0 goes to infinity”, this is a perfectly sensible function, which it is convenient to just abbreviate as “1/x”. Though for obvious reasons I would only recommend doing this if the “infinity” you are using represents both arbitrarily large positive and negative quantities. (EDIT: But if you want to define a function on [0,infty) by “x goes to 1/x if x is nonzero, and 0 goes to infinity” with “infinity” now only being large in the positive direction, which is likely what’s actually under consideration here, then this is not so dumb.)
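A minimal Haskell sketch of that last function (the type and names are mine, just to make the construction concrete; incidentally, IEEE floating point already builds in signed infinities in much this spirit):

    -- One-point extension of [0, infty): "x goes to 1/x if x is nonzero,
    -- and 0 goes to infinity"; symmetrically, infinity goes back to 0.
    data ExtNonNeg = Finite Double | Inf
      deriving (Show, Eq)

    recipExt :: ExtNonNeg -> ExtNonNeg
    recipExt (Finite 0) = Inf
    recipExt (Finite x) = Finite (1 / x)
    recipExt Inf        = Finite 0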
All this is irrelevant to any actual physical questions, where whether using infinities is appropriate or not just depends on, well, the physics of it.
They are limited by the scope of whatever theory you are working in.
Yes, and of course which theory will be appropriate is going to be determined by the actual physics. My point is just that your statement that “pure math does not have infinities” and physicists “added them in” is wrong (even ignoring historical inaccuracies).
Selective quotation. I said:
That is not a statement that the field of mathematics does not have infinities. I was referring specifically to “the way I learned calculus”. Unless you took my class, you don’t know what I did or did not learn and how I learned it. My statement was true, your “correction” was false.
Ah, sorry then. This is the sort of mistake that’s common enough that it seemed more natural to me to read it that way rather than the literal and correct way.
I might call engineers “practical folk”; astrophysicists I’m not so sure. I’d like to see their reason for doing so.
I never really got why the math is said to ‘break down’. Is it just because of a divide by zero thing or something more significant? I guess I just don’t see a particular problem with having a part of the universe really being @%%@ed up like that.
What I think is more likely is that the universe does not actually divide by zero, and the singularity is a gap in our knowledge. Gaps in knowledge are the problem of science, whose function is to fill them.
I’m really surprised at the amount of anti-infinitism that rolls around Less Wrong.
“Infinity-noncognitivist” would be more accurate in my case (but it all depends on the definition; I await one that I can see how to interpret, and I accept all the ones that I already know how to interpret [some mentioned above]).
It’s not just you. There was just recently another thread going on about how the real numbers ought to be countable and what-not.
From your post it sounds like you in fact do not have a clear picture of infinity in your head. I have a feeling this is true for many people, so let me try to paint one. Throughout this post I’ll be using “number” to mean “positive integer”.
Suppose that there is a distinction we can draw between certain types of numbers and other types of numbers. For example, we could make a distinction between “primes” and “non-primes”. A standard way to communicate the fact that we have drawn this distinction is to say that there is a “set of all primes”. This language need not be construed as meaning that all primes together can be coherently thought of as forming a collection (though it often is construed that way, usually pretty carelessly); the key thing is just that the distinction between primes and non-primes is itself meaningful. In the case of primes, the fact that the distinction is meaningful follows from the fact that there is an algorithm to decide whether any given number is prime.
Now for “infinite”: A set of numbers is called infinite if for every number N, there exists a number greater than N in the set. For example, Euclid proved that the set of primes is infinite under this definition.
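In programming terms (a sketch, with names of my own choosing), the claim “the set of primes is infinite” says exactly that the following search terminates for every input; Euclid’s argument is what guarantees that it does:

    -- Decide the distinction "prime vs. non-prime" by trial division.
    isPrime :: Integer -> Bool
    isPrime n = n > 1 && all (\d -> n `mod` d /= 0)
                             (takeWhile (\d -> d * d <= n) [2 ..])

    -- "For every number N, there exists a number greater than N in the set":
    -- a witness-producing function that always halts.
    primeAbove :: Integer -> Integer
    primeAbove n = head [p | p <- [n + 1 ..], isPrime p]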
Now this definition is a little restrictive in terms of mathematical practice, since we will often want to talk about sets that contain things other than numbers, but the basic idea is similar in the general case: the semantic function of a set is provided not by the fact that its members “form a collection” (whatever that might mean), but rather by the fact that there is a distinction of some kind (possibly of the kind that can be determined by an algorithm) between things that are in the set and things that are not in the set. In general a set is “infinite” if for every number N, the set contains more than N members (i.e. there are more than N things that satisfy the condition that the set encodes).
So that’s “infinity”, as used in standard mathematical practice. (Well, there’s also a notion of “infinity” in real analysis which essentially is just a placeholder symbol for “a really large number”, but when people talk about the philosophical issues behind infinity it is usually about the definition I just gave above, not the one in real analysis, which is not controversial.)

Now, why is this at all controversial? Well, note that to define it, I had to talk about the notion of distinctions-in-general, as opposed to any individual distinction. But is it really coherent to talk about a notion of distinctions-in-general? Can it be made mathematically precise? This is really what the philosophical arguments are all about: what kinds of things are allowed to count as distinctions. The constructivists take the point of view that the only things that should be allowed to count as distinctions are those that can be computed by algorithms.

There are some bullets to bite if you take this point of view though. For example, the twin prime conjecture states that for every number N, there exists p > N such that both p and p+2 are prime. Presumably this is either true or false, even if nobody can prove it. Moreover, presumably each number N either is or is not a counterexample to the conjecture. But then it would seem that it is possible to draw a distinction between those N which satisfy the conclusion of the conjecture, and those which are counterexamples. Yet this is false according to the constructive point of view, since there is no algorithm to determine whether any given N satisfies the conclusion of the conjecture.
I guess this is probably long enough already given that I’m replying to a five-year-old post… I could say more on this topic if people are interested.
I think you mean, ‘determine that it does not satisfy the conclusion’.
I think my original sentence is correct; there is no known algorithm that provably outputs the answer to the question “Does N satisfy the conclusion of the conjecture?” given N as an input. To do this, an algorithm would need to do both of the following: output “Yes” if and only if N satisfies the conclusion, and output “No” if and only if N does not satisfy the conclusion. There are known algorithms that do the first but not the second (unless the twin prime conjecture happens to be true).
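Concretely (a Haskell sketch, with my own names, using the same trial-division isPrime as in the earlier sketch), the algorithm that “does the first” is just the obvious search; if N happens to be a counterexample, the search never halts, and nobody knows how to replace it with one that provably answers “No”:

    isPrime :: Integer -> Bool
    isPrime n = n > 1 && all (\d -> n `mod` d /= 0)
                             (takeWhile (\d -> d * d <= n) [2 ..])

    -- Outputs "Yes" iff some p > n has p and p+2 both prime.
    -- If no such p exists, this simply runs forever instead of saying "No".
    satisfiesConclusion :: Integer -> String
    satisfiesConclusion n =
      head ["Yes" | p <- [n + 1 ..], isPrime p, isPrime (p + 2)]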
You’re pointing to a concept represented in your brain, using a label which you expect will evoke analogous representations of that concept in readers’ brains, and asserting that that thing is not something that a human brain could represent.
The various mathematical uses of infinity (infinite cardinals, infinity as a limit in calculus, infinities in nonstandard analysis, etc.) are all well-defined and can be stored as information-bearing concepts in human brains. I don’t think there’s any problem here.
It looks like we agree but you either misread or I was unclear:
I’m not asserting that the definition of infinity I mentioned at the beginning (“a number that is big enough for its smallness to be negligible for the purpose at hand”) is not something a human brain could represent. I’m saying that if the speaker considers “infinity” to be something that a human brain cannot represent, I must question what they are even doing when they utter the word. Surely they are not communicating in the sense Eliezer referred to, of trying to get someone else to have the same content in their head. (If they simply want me to note a mathematical symbol, that is fine, too.)
I also agree that various uses of concepts that could be called infinity in math can be stored in human brains, but that depends on the definitions. I am not “anti-infinity” except if the speaker claims that their infinity cannot be represented in anyone’s mind, but they are talking about it anyway. That would just be a kind of “bluffing,” as it were. If there are sensical definitions of infinity that seem categorically different than the ones I mentioned so far, I’d like to see them.
In short, I just don’t get infinity unless it means one of the things I’ve said so far. I don’t want to be called a “finitist” if I don’t even know what the person means by “infinite.”
Oi, that’s not right. The domain of these functions is not the set of reals in [0, 1] but the set of infinite sequences of bits; while there is a bijection between these two sets, it’s not the obvious one of binary expansion, because in binary, 0.0111… and 0.1000… represent the same real number. There is no topology-preserving bijection between the two sets. Also, the functions have to be continuous; it’s easy to come up with a function (e.g. equality to a certain sequence) for which the given functions don’t work.
Of course, it happens that the usual way of handling “real numbers” in languages like Haskell actually handles things that are effectively the same as bit sequences, and that there’s no way to write a total non-continuous function in a language like Haskell, making my point somewhat moot. So, carry on, then.
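For anyone curious what the corrected claim looks like in code, here is a sketch of the construction, roughly following Escardó’s “Seemingly impossible functional programs” (details from memory, so treat it as illustrative): equality of total predicates on infinite bit sequences, decided in finite time.

    -- The Cantor space: infinite bit sequences, as functions from indices to bits.
    type Bit    = Bool
    type Cantor = Integer -> Bit

    -- Prepend a bit to a sequence.
    (#) :: Bit -> Cantor -> Cantor
    (b # _) 0 = b
    (b # a) i = a (i - 1)

    -- For a *total* predicate p, find a sequence satisfying p if one exists.
    find :: (Cantor -> Bool) -> Cantor
    find p =
      if forsome (\a -> p (False # a))
        then False # find (\a -> p (False # a))
        else True  # find (\a -> p (True  # a))

    forsome, forevery :: (Cantor -> Bool) -> Bool
    forsome p  = p (find p)
    forevery p = not (forsome (not . p))

    -- Decidable equality of total functions from Cantor space to Bool.
    -- (Totality is exactly the continuity caveat in the parent comment.)
    equal :: (Cantor -> Bool) -> (Cantor -> Bool) -> Bool
    equal f g = forevery (\a -> f a == g a)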
Your comment is basically correct. This paper deals with the representation issue somewhat. But I think those results are applicable to computation in general, and the choice of Haskell is irrelevant to the discussion. You’re welcome to prove me wrong by exhibiting a representation of exact reals that allows decidable equality, in any programming language.
Yes, a constructivist mathematician does not believe in proof by contradiction (at least not in the form of proving that something exists by deriving a contradiction from its nonexistence).
Huh. Good to know.