The Euler example raises an issue: when should one be more confident in some heuristically believed claim than in claims proven in the mathematical literature? For example, the proof of the classification of finite simple groups consists of hundreds of distinct papers by about as many authors. How confident should one be that that proof is actually correct and doesn’t contain serious holes? How confident should one be that we haven’t missed any finite simple groups? I’m substantially more confident that no group has been missed (>99%?) but much less so in the validity of the proof. Is this the correct approach?
Then there are statements which simply look extremely likely. Take, for example, “White has a winning strategy in chess if Black has to play down a queen.” How confident should one be in this sort of statement? If someone said they had a proof that it was false, what would it take to convince one that the proof was valid? It would seem to take a lot more than for most mathematical facts, but how much more, and can we articulate why?
Note incidentally that there are a variety of conjectures that are currently believed for reasons close to Euler’s reasoning. For example, P = BPP is believed because we have a great many different statements that all imply it. Similarly, the Riemann hypothesis is widely believed due to a combination of partial results (a positive fraction of zeros must be on the line, almost all zeros must be near the line, the first few billion zeros are on the line, a random model of the Möbius function implies RH, etc.). But how confident should we be in such conjectures?
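As a small illustration of the random-model evidence (my own sketch, not anyone’s in this thread): RH is equivalent to the Mertens sum M(x) = sum of mu(n) for n up to x growing like O(x^(1/2+eps)), which is the growth rate of a random walk of plus/minus 1 steps. A quick sieve shows how tame M(x) is in a directly computable range:

```python
# Sieve the Mobius function mu(n) up to N and track the Mertens sum
# M(x) = sum_{n<=x} mu(n). The random model predicts |M(x)| stays well
# below sqrt(x) in any small range. (The stronger Mertens conjecture
# |M(x)| < sqrt(x) is actually false, but its first failure lies far
# beyond any directly computed range.)
N = 100_000
mu = [1] * (N + 1)
is_prime = [True] * (N + 1)
for p in range(2, N + 1):
    if is_prime[p]:
        for m in range(p, N + 1, p):
            if m > p:
                is_prime[m] = False
            mu[m] *= -1            # one factor of p flips the sign
        for m in range(p * p, N + 1, p * p):
            mu[m] = 0              # not squarefree: mu vanishes

M, worst = 0, 0.0
for x in range(1, N + 1):
    M += mu[x]
    if x >= 2:
        worst = max(worst, abs(M) / x**0.5)
print(worst < 1)  # True in this range
```

Of course, as the thread itself stresses, a finite check like this is weak evidence on its own; the point is just that the observed behavior matches the random walk prediction.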
The question of whether there is a missing finite simple group is a precise one. But what does it mean for a natural language proof to be valid? Typically a proof contains many precise lemmas, and one could ask whether those statements are correct (though this leaves open whether they prove the theorem). Yet lots of math papers contain lemmas that are false as stated, but where the paper would be considered salvageable if anyone noticed.
This is a very standard list of evidence, but I am skeptical that it reflects how mathematicians judge the evidence. I think that of the items you mention, the random model is by far the most important. The study of small zeros is also relevant. But I don’t think that the theorems about infinitely many zeros have much effect on the judgement.
I agree with this, but I would cite the empirical truth of the RH for other global zeta functions, as well as the proof of the Weil conjectures, as evidence that mathematicians actually think about.
I’d be interested in corresponding a bit — shoot me an email if you’d like.
An interesting thing about the GRH is that an oft-neglected piece of evidence against it is how the Siegel zero seems to behave like a real object, i.e., it has consistent properties.
I’m not sure what you mean here. If it didn’t have consistent properties we could show it doesn’t exist. Everything looks consistent up until the point you show it isn’t real. Do you mean that it has properties that don’t look that implausible? That seems like a different argument.
If we can easily prove a conjecture except for some seemingly arbitrary case, that’s evidence for the conjecture being false in that case.
The classification of finite simple groups is a very telling example because it was incorrectly believed to be finished back in the 1980s, but wasn’t actually finished until 2004. There are many other examples of widely accepted “results” that didn’t measure up, some of which were not merely unproven but actively incorrect. For instance, John von Neumann’s “proof” that there was no hidden variables theory of quantum mechanics was widely cited at least into the 1980s, 30 years after David Bohm had in fact constructed exactly such a theory.
I suspect we should not be at all confident that the Riemann hypothesis is true given current evidence. There are some reasons to believe it might be false, and the reasons you cite aren’t strong evidence that it is true. Given that this is math, not physical reality, there is an infinite space of values to search, including values vastly larger than any we have tested. There are also many examples of hypotheses that were widely believed to be true in math until a counterexample was found.
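A concrete illustration of that last point (my own, not from the comments above) is the Pólya conjecture: it asserted that L(n), the sum of lambda(k) = (-1)^Omega(k) for k up to n, satisfies L(n) <= 0 for every n >= 2. Numerical evidence "confirms" it far past anything computable by hand, yet the smallest counterexample turns out to be n = 906,150,257:

```python
# The Polya conjecture asserted L(n) = sum_{k<=n} lambda(k) <= 0 for all
# n >= 2, where lambda(k) = (-1)^Omega(k) and Omega(k) counts prime
# factors with multiplicity. The smallest counterexample is
# n = 906,150,257, so any naive search like this one "confirms" it.
N = 100_000
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:          # p is prime (untouched by smaller primes)
        pk = p
        while pk <= N:         # each prime power p^j contributes 1
            for m in range(pk, N + 1, pk):
                omega[m] += 1
            pk *= p

L = 1                          # lambda(1) = +1
holds = True
for k in range(2, N + 1):
    L += -1 if omega[k] % 2 else 1
    if L > 0:
        holds = False
        break
print(holds)  # True: no counterexample below 100,000
```

This is exactly the failure mode worried about here: a check over hundreds of millions of cases would still have looked like overwhelming evidence.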
P != NP is another example of a majority accepted hypothesis with somewhat stronger evidence in the physical world than the Riemann hypothesis has. Yet there are respected professional mathematicians (as well as unrespected amateurs like myself) who bet the other way. I would ask for odds though. :-)
BTW what was John von Neumann’s “proof”?
I won’t pretend to be able to reproduce it here. You can find the original in English translation in von Neumann’s Mathematical Foundations of Quantum Mechanics. According to Wikipedia,
Von Neumann’s abstract treatment permitted him also to confront the foundational issue of determinism vs. non-determinism and in the book he presented a proof according to which quantum mechanics could not possibly be derived by statistical approximation from a deterministic theory of the type used in classical mechanics. In 1966, a paper by John Bell was published, claiming that this proof contained a conceptual error and was therefore invalid (see the article on John Stewart Bell for more information). However, in 2010, Jeffrey Bub published an argument that Bell misconstrued von Neumann’s proof, and that it is actually not flawed, after all.
So apparently we’re still trying to figure out if this proof is acceptable or not. Note, however, that Bub’s claim is that the proof didn’t actually say what everyone thought it said, not that Bohm was wrong. Thus we have another possible failure mode: a correct proof that doesn’t say what people think it says.
This is actually more common than it should be, and goes way beyond math. There are many examples of well-known “facts” for which numerous authoritative citations can be produced, but which are in reality false. For example, the lighthouse and aircraft carrier story is in fact false, despite “appearing in a 1987 issue of Proceedings, a publication of the U.S. Naval Institute.”
Of course, as I type this I notice that I haven’t personally verified that the 1987 issue of Proceedings says what Stephen Covey’s The Seven Habits of Highly Effective People, the secondary source that cited it, says it says. This is how bad sources work their way into the literature. Too often authors copy citations from each other without going back to the original. How many of us know about experiments like Robbers Cave or Stanford Prison only from HpMOR? What’s the chance we’ve explained it to others, but gotten crucial details wrong?
I’ve seen the claim that von Neumann had a fake proof in a couple of places, and it always bothers me, since it seems to me that one can construct a hidden variable theory that explains any set of statistical predictions: just have the hidden variables be the responses to every possible measurement! (Or various equivalent schemes.) One needs a special condition on the type of hidden variable theory, like the locality condition in Bell’s theorem.
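To make the triviality concrete, here is a minimal sketch (my own illustration, not from any cited source): an unconstrained deterministic hidden-variable model that reproduces arbitrary single-measurement statistics. This is exactly why a no-hidden-variables theorem must impose an extra condition such as locality.

```python
# With no locality or structure constraints, a "hidden variable" model
# trivially reproduces any single-measurement statistics: the hidden
# variable is a uniform seed, and the deterministic response function
# simply encodes the target probability.
import random

def hidden_variable_outcome(p_up, lam):
    """Deterministic response: the outcome is fixed once lam is given."""
    return +1 if lam < p_up else -1

def run(p_up, trials=100_000, seed=0):
    rng = random.Random(seed)
    ups = sum(hidden_variable_outcome(p_up, rng.random()) == +1
              for _ in range(trials))
    return ups / trials

print(run(0.36))  # close to the target probability 0.36
```

Each individual outcome is fully determined by the hidden variable; only the ensemble over lam reproduces the statistics, which is all an unconstrained model is required to do.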
Very nice comment.