I think there’s a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while “PA is inconsistent” is a much lonelier affair. A standard reply is that every time a mathematician proves an interesting theorem without encountering a contradiction in PA, he has given evidence for the consistency of PA. For various reasons I don’t see it that way.
Same question as for JoshuaZ: has your prior for “a contradiction in PA will be found within a hundred years” moved since Nelson’s announcement?
has your prior for “a contradiction in PA will be found within a hundred years” moved since Nelson’s announcement?
Yes, obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate.
Like yours, it also moved back down when Tao responded, back up a bit when Nelson responded to him, and back down a bit more when Tao responded to him and I finally managed a coherent guess at what they were talking about.
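As an aside, the direction of that update can be made concrete with a minimal odds-form Bayes sketch in Python. The prior and the two likelihoods below are purely illustrative numbers assumed for the sketch, not figures anyone in this exchange has stated; the only point it demonstrates is that a likelihood ratio above 1 pushes the estimate up.

```python
# Odds-form Bayes update: posterior odds = prior odds * likelihood ratio.
# All numbers are illustrative assumptions, not anyone's stated estimates.

prior_p = 0.001            # assumed prior P(PA is inconsistent)
p_claim_given_incon = 0.5  # assumed P(respectable mathematician claims a contradiction | inconsistent)
p_claim_given_con = 0.01   # assumed P(respectable mathematician claims a contradiction | consistent)

prior_odds = prior_p / (1 - prior_p)
likelihood_ratio = p_claim_given_incon / p_claim_given_con
posterior_odds = prior_odds * likelihood_ratio
posterior_p = posterior_odds / (1 + posterior_odds)

print(f"prior:     {prior_p:.6f}")
print(f"posterior: {posterior_p:.6f}")  # larger than the prior, since the likelihood ratio exceeds 1
```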
I think there’s a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while “PA is inconsistent” is a much lonelier affair.
I’m not sure this is an important difference. I think scepticism about P ≠ NP proofs might well be just as valid even if far fewer people were working on it. If anything it would be more valid: lots of failed proofs give you lots of chances to learn from the mistakes of others, as well as to avoid routes that others in the field have already shown not to work. Furthermore, the fact that huge numbers of mathematicians work on P vs NP but have never claimed a proof suggests a selection effect in favour of those who do claim proofs, which is absent in the case of inconsistency.
Furthermore, without wanting to be unfair to Nelson, the fact that he’s working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof of something when you want it to be true.
I’m not sure this line of debate is a productive one; the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating about priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).
Yes, obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate.
Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one?
I’m not sure this line of debate is a productive one; the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating about priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).
Sorry if I gave you a bad impression: I am not trying to start a debate in any adversarial sense. I am just curious.
Furthermore, without wanting to be unfair to Nelson, the fact that he’s working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof of something when you want it to be true.
Of that there’s no doubt, but it speaks well of Nelson that he’s apparently resisted the temptation toward self-deceit for decades, openly working on this problem the whole time.
Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one?
The announcement came as a surprise, so the update wasn’t negligible. I probably wouldn’t have gone as low as million-to-one before, but I might have been prepared to estimate a 99.9% chance that arithmetic is consistent. However, I’m not quite sure how much of this change is a Bayesian update and how much is the fact that I got a shock and thought about the issue a lot more carefully.
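For concreteness, here is a small Python sketch of the odds/probability conversions behind the figures quoted in this exchange (“million-to-one”, “hundred-to-one”, “99.9%”); it only restates the arithmetic and assumes nothing beyond those quoted numbers.

```python
# Convert "N-to-one against" odds into probabilities and back,
# using only the figures quoted in the exchange above.

def against_odds_to_prob(n):
    """Probability of the event, given odds of n-to-one against it."""
    return 1 / (n + 1)

def prob_to_against_odds(p):
    """Odds against the event, given its probability p."""
    return (1 - p) / p

print(against_odds_to_prob(1_000_000))   # million-to-one  -> about 1e-6
print(against_odds_to_prob(100))         # hundred-to-one  -> about 0.0099
print(prob_to_against_odds(1 - 0.999))   # "99.9% consistent" -> 999-to-one against a contradiction
```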