mathematicians from Pythagoras through Dedekind had absolutely no problem talking about numbers
in the absence of the Peano axioms.
So, they were lucky. It could have been that that-which-Pythagoras-calls-number was not that-which-Fibonacci-calls-number.
Are there absolutely no examples of cases where mathematicians disagreed about some theorem about some not-yet-axiomatized subject, and then it turns out the disagreement was because they were actually talking about different things?
(I know of no such examples, but I would be surprised if none exist.)
There is such an example—rather more complicated than you’re describing, but the same sort of thing: Euler’s theorem about polyhedra, before geometry was formalised. This is the theorem that F − E + V = 2, where F, E, and V are the numbers of faces, edges, and vertices of a polyhedron. What is a polyhedron?
Lakatos’s book “Proofs and Refutations” consists of a history of this problem in which various “odd” polyhedra were invented and the definition of a polyhedron correspondingly refined, until reaching the present understanding of the theorem.
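As a sanity check of the uncontroversial cases, here is a minimal Python sketch (the face/edge/vertex counts are the standard ones for the Platonic solids; the snippet itself is my illustration, not from Lakatos):

```python
# Sanity-check F - E + V = 2 for some uncontroversial convex polyhedra.
# The (F, E, V) counts are the standard ones for these solids.
polyhedra = {
    "tetrahedron": (4, 6, 4),
    "cube": (6, 12, 8),
    "octahedron": (8, 12, 6),
    "dodecahedron": (12, 30, 20),
    "icosahedron": (20, 30, 12),
}

for name, (f, e, v) in polyhedra.items():
    print(f"{name}: F - E + V = {f - e + v}")  # prints 2 each time
```

Lakatos’s “odd” polyhedra are exactly the cases where a check like this fails; a picture-frame (torus-like) polyhedron, for instance, gives 0 rather than 2.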
Upvoted for “Proofs and Refutations” reference
I don’t know if this is true, but I once had a lecturer tell me that there used to be considerable debate over the question of whether a derivative of a differentiable function was necessarily continuous, which ultimately boiled down to the two sides having different definitions of continuity, but not realising it since neither had ever fully set down their axioms and definitions.
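For what it’s worth, the settled modern answer is that a derivative need not be continuous; the standard counterexample (a fact, independent of the anecdote) is f(x) = x² sin(1/x) with f(0) = 0, which is differentiable everywhere but has a derivative that is discontinuous at 0. A minimal numerical sketch:

```python
import math

# Standard counterexample: f(x) = x^2 sin(1/x), f(0) = 0 is differentiable
# everywhere, yet f' is discontinuous at 0.
def fprime(x):
    # Derivative away from 0: 2x sin(1/x) - cos(1/x).  f'(0) = 0 exists by
    # the definition of the derivative, since |f(h)/h| <= |h| -> 0.
    if x == 0:
        return 0.0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Along x = 1/(k*pi) -> 0, f'(x) alternates near +1 and -1, so f' has
# no limit at 0 even though f'(0) exists.
for k in range(1, 7):
    x = 1 / (k * math.pi)
    print(f"x = {x:.5f}, f'(x) = {fprime(x):+.5f}")
```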
I’ve heard that before also but haven’t seen a source for it. But keep in mind that that’s a question about functions on the real numbers, not about what is being called “numbers” here, which seems to be a stand-in for the integers or the natural numbers.
I thought he was asking if it had ever happened in any not-yet-axiomatised subject, presumably looking for examples other than arithmetic.
Yes. I think the mathematicians were lucky that it didn’t happen on the sort of integers they were discussing (there was, after all, great discussion about irrational numbers, zero, later imaginary numbers, and even Archimedes’ attempt to describe big integers was probably not without controversy).
Hmm, in that case, it might be relevant to point out examples that don’t quite fit Plasmon’s situation but are almost the same: there are a variety of examples where, due to a lack of rigorous axiomatization, statements were believed to be true that just turned out to be false. One classical example is that of a function continuous on an interval yet nowhere differentiable. Everyone took for granted that a continuous function could only fail to be differentiable at isolated points until Weierstrass showed otherwise (a numerical sketch follows below).
For another one, Russell’s paradox seems like it was a consequence of naively assuming our intuitions about what counts as a ‘set’ would necessarily be correct, or even internally consistent.
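Regarding the Weierstrass example above, here is a rough numerical sketch (evidence, not a proof), using a partial sum of W(x) = Σₙ aⁿ cos(bⁿπx) with Weierstrass’s original condition ab > 1 + 3π/2; the difference quotients at 0 refuse to settle as h shrinks:

```python
import math

# Partial sum of the Weierstrass function W(x) = sum_n a^n cos(b^n * pi * x),
# with a = 0.5, b = 13 so that ab > 1 + 3*pi/2 (Weierstrass's original
# condition).  W is continuous everywhere and differentiable nowhere.
A, B, N_TERMS = 0.5, 13, 30

def W(x):
    return sum(A**n * math.cos(B**n * math.pi * x) for n in range(N_TERMS))

# Difference quotients at 0 swing ever more wildly as h shrinks --
# numerical evidence (not a proof) that W'(0) does not exist.
for h in [10.0 ** -k for k in range(1, 7)]:
    print(f"h = {h:.0e}, (W(h) - W(0)) / h = {(W(h) - W(0)) / h:+10.3f}")
```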
Plasmon:
So, they were lucky. It could have been that that-which-Pythagoras-calls-number was not that-which-Fibonacci-calls-number.
Why do you imagine that the introduction of an axiomatic system would address this problem?
To quote, just put a greater-than sign > at the beginning of the first line.

Because then the problem is not “Does this non-axiomatized stuff obey that theorem?” but “Does that theorem follow from these axioms?”. One is a pure logic problem, and proofs may be checked by automated proof-checkers. The other directly or indirectly relies on the mathematician’s intuition of the non-axiomatized subject in question, and can’t be checked by automated proof-checkers.
Except insofar as the mathematicians, unknown to each other, have different ideas of what constitutes a valid rule of inference.
A logical system is a mathematical object. If the problem is that different mathematicians might think they’re talking about the same object when they’re really talking about different ones, then I don’t see why logical systems, out of all mathematical objects, should be particularly exempt from this problem.
Also, to put this in the context of the original post: if we’re talking about second-order logic, then of course there can be no automated proof-checkers.
You appear to be confused here. The rest of your post is good.
I should have said this more carefully. If one allows enough rules of inference so that all the logical consequences of the axioms can be proved, then there are no automated proof checkers. So you can have proof checkers, but only at the cost of restricting the system so that not all logical consequences (i.e. implications that are true in every model) can be proved.
I think there’s a definitional issue here. Eugine is using “proof checker” to mean an algorithm that given a sequence of statements in an axiomatic system verifies that the proof is formally valid. You seem to mean by proof checker something like a process that goes through listing all valid statements in the system along with a proof of the statement.
JoshuaZ: No, I mean the former. The problem is that if you have enough rules of inference to allow you to extract all logical consequences of your axioms, then that set of rules of inference is going to be too complicated to explain to any computer (i.e., the rules of inference are non-recursive).
Isn’t that more a consequence of the stronger statement that you just can’t write down all valid inferences in the second-order system?
Do you mean that there can be no automated proof-checkers which are sound and complete (and terminating)? Surely there can be useful sound checkers (which terminate)? The existence of Coq and other dependently typed languages suggests that’s true.
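To make Eugine’s sense of “proof checker” concrete, here is a toy sketch (my own illustration, not anyone’s real system): a Hilbert-style checker for propositional logic, where each line must be a given premise or follow from two earlier lines by modus ponens. Checking a proof like this is purely mechanical; the dispute above is about whether any such mechanical rule set can reach all second-order consequences.

```python
# Toy proof checker: a proof is valid iff every line is a given premise
# or follows from two earlier lines by modus ponens.  Formulas are atoms
# (strings) or implications represented as ("->", antecedent, consequent).
def check_proof(premises, proof):
    proved = []
    for i, line in enumerate(proof, start=1):
        follows = line in premises or any(
            impl == ("->", ant, line)       # from ant and ant -> line, get line
            for impl in proved for ant in proved
        )
        if not follows:
            return f"invalid: line {i} does not follow"
        proved.append(line)
    return "valid"

premises = ["P", ("->", "P", "Q"), ("->", "Q", "R")]
print(check_proof(premises,
                  ["P", ("->", "P", "Q"), "Q", ("->", "Q", "R"), "R"]))  # valid
print(check_proof(premises, ["R"]))  # invalid: line 1 does not follow
```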
LessWrong uses Markdown for comment formatting. Click the button labelled “Show help” under the right-hand side of the comment box.
Well, that would’ve worked fine in standard Markdown.
What is “standard Markdown”?
Specified here—I think it is canonically produced by Markdown.pl, downloadable there.

Notably, Markdown allows inline HTML. Most implementations that work with client data, like that of Reddit/LW, only allow a limited subset of Markdown functionality. (Most also add functionality.)
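To illustrate the inline-HTML point, a small sketch using the Python-Markdown package (just one implementation among many, and not a byte-for-byte clone of Markdown.pl):

```python
# Illustration with the Python-Markdown package (pip install markdown).
# Like Markdown.pl, it passes inline HTML through by default -- exactly the
# behaviour that sites handling untrusted client data restrict or sanitize.
import markdown

src = "Some *emphasis*, and some <em>raw inline html</em>."
print(markdown.markdown(src))
# -> <p>Some <em>emphasis</em>, and some <em>raw inline html</em>.</p>
```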
Oh, I see—a specification in the style of “only perl can parse perl.”
But then these “most implementations” are not implementations of “standard Markdown,” hence my confusion.
All universal programming languages (assembler, C, CLIP, Lisp, Cobol, Python, Java) can parse perl as well.
Only if they implement Perl, perfectly mimicking the functionality of perl (the only spec for Perl). Amongst other difficulties, Perl has available the full power of Perl at the preprocessing stage.

That doesn’t matter, kind of like non-paperclips.
Note the qualifier—“most implementations that work with client data”. Markdown is also used extensively to generate static content that is not user-generated.
There are still multiple implementations used in generating static content, no two of which really do the same thing, e.g., pandoc, multimarkdown, etc. These are all still arguably “markdown” (at least, they call themselves that) but don’t conform to standard Markdown as you understand it.
Consider also the parallel ‘postulate’, reluctantly introduced as an axiom in Euclid. People tried to prove it as a theorem for two thousand years, until it was realized that its negation defined entirely different kinds of geometry.
A not-quite-but-close fit: the distinction between prime and irreducible elements of a number field became necessary because unique factorization into primes failed in simple number fields.
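The textbook instance, if it helps: in Z[√−5], 6 factors into irreducibles in two genuinely different ways, so “irreducible” and “prime” come apart.

```latex
% In the ring Z[sqrt(-5)]:
%   6 = 2 * 3 = (1 + sqrt(-5))(1 - sqrt(-5)),
% and all four factors are irreducible.  But 2 divides the right-hand
% product without dividing either factor, so 2 is irreducible yet not prime.
\[
  6 = 2 \cdot 3 = \bigl(1 + \sqrt{-5}\bigr)\bigl(1 - \sqrt{-5}\bigr)
  \quad \text{in } \mathbb{Z}\bigl[\sqrt{-5}\bigr]
\]
```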
Various theorems, lemmas, and other principles equivalent to the Axiom of Choice (e.g. Zorn’s lemma) were argued over until it was established (by Kurt Gödel and Paul Cohen) that the AoC is entirely independent of the ZF axioms, i.e. ZFC and ZF+¬C are both consistent (assuming ZF itself is). I think this is the canonical example.
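For reference, the two statements whose equivalence (over ZF) was at issue, as I’d phrase them:

```latex
% Axiom of Choice: every family of nonempty sets has a choice function.
\[
  \forall i \in I\; (S_i \neq \emptyset)
  \;\Longrightarrow\;
  \exists f : I \to \bigcup_{i \in I} S_i
  \ \text{ such that }\
  \forall i \in I\; f(i) \in S_i
\]
% Zorn's Lemma (equivalent over ZF): if every chain in a nonempty
% partially ordered set P has an upper bound in P, then P has a
% maximal element.
```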
“The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn’s lemma?” — Jerry Bona