the questions where this seems to be most pronounced are mathematical questions that are close to my area of expertise (such as whether P = NP)
On a tangential note, exactly how close is this to your area of expertise? In my experience it tends to be mathematicians in related areas who don't actually work on complexity theory directly who insist on being agnostic about P vs NP; almost all complexity theorists are pretty much completely convinced of even stronger complexity-theoretic assumptions (e.g., I bet Scott Aaronson would give pretty good odds on BQP != NP).
I’m not entirely sure how tangential this is, as it seems to suggest that there may be some sort of sweet spot of expertise (at least on this question): any layman would take my word for it that P != NP, most non-CS mathematicians would refuse to have an opinion, and most complexity theorists are convinced of its truth for their own reasons. I guess this might be something unique to mathematics, with its insistence on formal proof as a standard of truth. Can anyone think of anything similar in other fields?
That’s a valid point. My own area is algebraic number theory, but I have some interest in complexity theory, probably more than most number theorists, and some of my work has certainly touched on complexity issues. I’m not at all agnostic about P vs NP; I assign about a 95% chance that P != NP, which I think is outside the “agnostic” zone by most standards.
Not “almost all are completely convinced”; according to this poll, 61 supposed experts “thought P != NP” (which does not imply that they would bet their house on it), 9 thought the opposite, and 22 offered no opinion. (The author writes that he asked “theorists”, partly people he knew, but also partly by posting to mailing lists; I’m pretty sure he filtered out the crackpots and that enough of the rest really are people working in the area.)
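To put those numbers in perspective, here is the quick arithmetic, using only the figures quoted above (the exact respondent count is whatever the poll reports; I'm just summing the three groups):

```python
# Quick arithmetic on the poll figures quoted above.
yes, no, no_opinion = 61, 9, 22
total = yes + no + no_opinion                      # 92 respondents in these three groups
print(f"{yes / total:.0%} answered P != NP")       # ~66%
print(f"{(no + no_opinion) / total:.0%} did not")  # ~34% said the opposite or abstained
```

Roughly two-thirds leaning one way is a clear majority, but it is not the same thing as "almost all completely convinced".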
Even that case wouldn’t increase the likelihood of P != NP to 1-epsilon, as experts have been wrong in the past, and their greater confidence could stem from reinforcement through groupthink, or from greater exposure to things they simply understand wrongly, rather than from a better overview. Somewhere in Eliezer’s posts, a study is referenced in which the thing an expert says he is 99% sure about happens only in 70% of cases; in another referenced study, people raised their subjective confidence in something vastly more than they actually changed their minds as they got more exposure to an issue, which means that an expert’s confidence doesn’t prove much more than a non-expert’s (someone with only light exposure to the issue).
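A rough back-of-the-envelope sketch of why miscalibration caps the update (the numbers are illustrative, taken from the 99%-stated / 70%-actual figure above, and the symmetric-reliability assumption is a strong simplification, not anything claimed in the cited studies):

```python
# Toy illustration: if experts who say "99% sure" turn out to be right only
# ~70% of the time, the evidential weight of hearing such a claim is bounded
# by the empirical hit rate, not by the stated 99%.

prior = 0.95                 # my own prior that P != NP (as stated upthread)
empirical_hit_rate = 0.70    # fraction of "99% sure" expert claims that held up

# Likelihood ratio implied by the *calibrated* reliability, assuming (simplistically)
# the same hit rate whether the claim is true or false:
lr = empirical_hit_rate / (1 - empirical_hit_rate)   # ~2.3, nowhere near 99

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * lr
posterior = posterior_odds / (1 + posterior_odds)
print(f"posterior after one such expert endorsement: {posterior:.3f}")  # ~0.978
```

On those assumptions, one confident expert endorsement moves you from 0.95 to roughly 0.98, which is still a long way from 1-epsilon.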