The issue is mainly that I don’t think I’m qualified to determine the outcome of the debate: Nelson and Tao are both vastly better at maths than I am, so I don’t have much to go on. I suspect others are in a similar predicament.
I’ve been trying to understand their conversation, and if I understand correctly Tao is right and Nelson has made a subtle and quite understandable error. However, I estimate P(my understanding is correct) at less than 0.5, so this doesn’t help much, and even if I am right there could easily be something I’m missing.
I would also assign this theorem quite a low prior. Compare it to P ≠ NP: when a claimed proof of that comes out, mathematicians are usually highly sceptical (and usually right), even if the author is a serious mathematician like Deolalikar rather than a crank. Another example that springs to mind is Cauchy’s failed proof of Fermat’s Last Theorem (even though that was eventually proven, I still think a low prior was justified in both cases).
This, if correct, would be a vastly bigger result than either of those. I don’t think it would be an exaggeration to call this the single most important theorem in the history of mathematics if correct, so I think it deserves a much lower prior than them. Even more so since, in the case of P ≠ NP, most mathematicians at least think it’s true even if they are sceptical of most proofs, whereas in this case most mathematicians would probably be happy to bet against it (that counts for something, even if you disagree with them).
I don’t have much money to lose at this point in my life, but I’d be happy to bet $50 to $1 that this is wrong.
I think there’s a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while “PA is inconsistent” is a much lonelier affair. A standard reply is that every time a mathematician proves an interesting theorem without encountering a contradiction in PA, he has given evidence for the consistency of PA. For various reasons I don’t see it that way.
Same question as for JoshuaZ: has your prior for “a contradiction in PA will be found within a hundred years” moved since Nelson’s announcement?
has your prior for “a contradiction in PA will be found within a hundred years” moved since Nelson’s announcement?
Yes, obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate.
Like yours, it also moved back down when Tao responded, back up a bit when Nelson responded to him, and back down a bit more when Tao responded to him and I finally managed a coherent guess at what they were talking about.
I think there’s a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while “PA is inconsistent” is a much lonelier affair.
I’m not sure this is an important difference. I think scepticism about P ≠ NP proofs might well be just as valid even if far fewer people were working on it. If anything it would be more valid: lots of failed proofs give you lots of chances to learn from the mistakes of others, as well as letting you avoid routes which others in the field have proven not to work. Furthermore, the fact that huge numbers of mathematicians work on P vs NP but have never claimed a proof suggests a selection effect in favour of those who do claim proofs, which is absent in the case of inconsistency.
Furthermore, not wanting to be unfair to Nelson, but the fact that he’s working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof of something when you want it to be true.
I’m not sure this line of debate is a productive one; the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating about priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).
Yes, obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate.
Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one?
I’m not sure this line of debate is a productive one; the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating about priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).
Sorry if I gave you a bad impression: I am not trying to start a debate in any adversarial sense. I am just curious.
Furthermore, not wanting to be unfair to Nelson, but the fact that he’s working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof of something when you want it to be true.
Of that there’s no doubt, but it speaks well of Nelson that he’s apparently resisted the temptation toward self-deceit for decades, openly working on this problem the whole time.
Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one?
The announcement came as a surprise, so the update wasn’t negligible. I probably wouldn’t have gone as low as million-to-one before, but I might have been prepared to estimate a 99.9% chance that arithmetic is consistent. However, I’m not quite sure how much of this change is a Bayesian update and how much is the fact that I got a shock and thought about the issue a lot more carefully.
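The size of update being discussed can be sketched numerically with odds-form Bayes. The likelihood ratio below is purely made up for illustration; nothing in the thread pins down a real number:

```python
def posterior(prior_inconsistent, likelihood_ratio):
    """Posterior P(PA inconsistent) after observing the claim, where
    likelihood_ratio = P(claim | inconsistent) / P(claim | consistent)."""
    prior_odds = prior_inconsistent / (1 - prior_inconsistent)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A 99.9% prior on consistency (0.1% on inconsistency), and suppose a
# respectable mathematician is 50x more likely to claim a contradiction
# if one actually exists:
print(posterior(0.001, 50))  # ~0.048
```

Even a large likelihood ratio moves a small prior only so far, which is why the answer distinguishes a genuine Bayesian update from the shock of thinking harder about the question.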
Cauchy? Don’t you mean Lamé?
From what I gathered, both went down a similar route at the same time, although it is possible that I am mistaken in this. I named Cauchy as he seems to be the better-known of the two, and therefore the one with the highest absolutely-positively-not-just-some-crank factor.
Anyway, if you lose the bet then 1 = 0, so you don’t need to transfer any money.
If I lose the bet (not that anyone has yet agreed to accept the bet), then payment will be based on restricted number theory without the axiom of induction.
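The joke above trades on ex falso quodlibet: if PA really were inconsistent, 1 = 0 would be provable, and from it any proposition at all would follow, including "the debt is settled". A minimal Lean 4 sketch of the principle, working in ℕ rather than in PA proper:

```lean
-- Ex falso quodlibet: given a (hypothetical) proof that 1 = 0 in ℕ,
-- any proposition P follows. `by decide` discharges ¬(1 = 0), and
-- `absurd` combines it with h to prove anything.
example (P : Prop) (h : 1 = 0) : P :=
  absurd h (by decide)
```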