You make the common error of viewing the answer as binary. A proper rationalist would be assigning probabilities, not making a binary decision. In the one-in-a-million example, your friend thinks he has the right answer, and it is because he thinks he is very probably right that he is irrational. In the P=NP example, I do not have any such certainty.
I would imagine that a review of P=NP in the manner you describe probably wouldn’t push me too far from 50/50 in either direction. If it pushed me to 95/5, I’d have to discredit my own analysis, since people who are much better at math than I am have put a lot more thought into it than I have, and they still disagree.
Now, imagine someone comes up with an insight so good that all mathematicians agree P=NP. That would obviously change my certainty, even if I couldn’t understand the insight. I would go from a rationally calibrated ~50/50 to a rationally calibrated ~99.99/.01 or something close. Thus, that insight certainly would be evidence.
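For concreteness, here is that update written out in odds form, as a minimal Python sketch. The likelihood ratio is a made-up, purely illustrative number, chosen only so the posterior lands near 99.99/0.01:

```python
# Sketch of the update above, in odds form (all numbers hypothetical).
prior_odds = 1.0  # ~50/50 on P=NP before the hypothetical insight

# Assume unanimous agreement among mathematicians is ~10,000x more
# likely if P=NP is true than if it is false (an invented ratio).
likelihood_ratio = 10_000.0

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior P(P=NP) = {posterior_prob:.4f}")  # -> 0.9999
```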
That said, you do raise an interesting issue about meta-uncertainty that I’m still mulling over.
ETA: P=NP was a very hypothetical example about which I know pretty much nothing. I also forgot the fun property of mathematics that you can have the right answer with near certainty, but no one cares if you can’t prove it formally. My actual point was about treating answers as inherently binary. The mistake of the irrational actor with ineffective tools seems to be his confidence in his wrong answer, not the wrong answer itself.
In a 2002 poll of 100 researchers, 61 believed the answer is no, 9 believed the answer is yes, 22 were unsure, and 8 believed the question may be independent of the currently accepted axioms, and so impossible to prove or disprove.
All mathematicians already agree that P != NP. I’m not sure quite how much more of a consensus you could ask for on an unsolved maths problem.
(see, e.g., Lance Fortnow or Scott Aaronson)
Wouldn’t that imply P != NP since otherwise there would be a counterexample?
There is a known concrete algorithm for every NP-complete problem that solves that problem in polynomial time if P=NP:
Enumerate all algorithms and run algorithm n for a 1/2^n fraction of the time; whenever algorithm n halts, check its result, and output that result if it is correct.
Nice! More explicitly: if the polynomial-time algorithm is at (constant) index K in our enumeration of all algorithms, we’d need about R*2^K steps of the meta-algorithm to run R steps of algorithm K. Thus, if algorithm K is bounded by a polynomial P(n) in the problem size n, it’d take P(n)*2^K steps of the meta-algorithm (polynomial in n, since K is a constant) to solve a problem of size n.
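Here is a minimal Python sketch of that meta-algorithm (this is essentially Levin’s universal search). The `candidate_algorithms` enumeration and the `is_correct` verifier are assumed stand-ins, not anything from the comments above; each candidate is modeled as a generator that yields None while working and finally yields its proposed answer:

```python
import itertools

def universal_search(instance, candidate_algorithms, is_correct):
    """Levin-style universal search (a sketch, not a practical solver).

    candidate_algorithms(instance) yields generators; generator n models
    "algorithm n" run on the instance, yielding None while working and
    finally yielding its proposed answer.  is_correct(instance, answer)
    is the polynomial-time verifier for the NP problem.  Algorithm n
    receives roughly a 1/2^n fraction of all simulation steps.
    """
    algorithms = candidate_algorithms(instance)
    running = []  # generators admitted to the pool so far
    for round_no in itertools.count():
        running.append(next(algorithms))  # admit algorithm `round_no`
        # In round r, algorithm n gets 2^(r - n) steps, so over time
        # its share of all steps approaches 1/2^(n+1).
        for n, gen in enumerate(running):
            for _ in range(2 ** (round_no - n)):
                try:
                    answer = next(gen)
                except StopIteration:
                    break  # this candidate halted without an answer
                if answer is not None and is_correct(instance, answer):
                    return answer
```

If some candidate at fixed index K solves the problem in P(n) steps, the simulation returns a correct answer after roughly 2^K * P(n) of its own steps, which is polynomial in n; this is exactly the constant-factor argument above.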
No. It could be that there is an algorithm that solves some NP-complete problem in polynomial time, yet there is no proof that it does so. We could even find ourselves in the position of having discovered an algorithm that runs remarkably fast on all instances it’s applied to, fast enough in practice to trash public-key cryptography, yet although it is in P we cannot prove that it is, or even that it works.
You mean, a substantial majority of sane and brilliant mathematicians. Never abuse a universal quantifier when math is on the line!
You’re (all) right, of course: there are several mathematicians who refuse to have an opinion on whether P = NP, and a handful who take the minority view (although, of the 8 who did so in Gasarch’s survey, ‘some’ admitted they were doing it just to be contrary, which really doesn’t leave many who actually believed P=NP).
What this definitely does not mean is that it’s rational to assign 50% probability to each side. My main point was that there is ample evidence to suggest that P != NP (see the Scott Aaronson post I linked to above) and a strong consensus in the community that P != NP. To insist that one should assign 50% of one’s probability to the possibility that P=NP is just plain wrong. If nothing else, Aaronson’s “self-referential” argument should be enough to convince most people here that P is probably a strict subset of NP.
Not all of them.
Just because there are some that disagree doesn’t mean we must assign 50% probability to each case.
I wasn’t claiming that we must assign 50% probability to each case.