I raise (at least) two distinct but related points in my post:
1. “When an argument seems very likely to be wrong but could be right with non-negligible probability, classify it as such, rather than classifying it as false.” I think that you’re pretty good on this point, and better than I had been.
2. One that you didn’t mention in your comment, and that I believe you and I have both largely missed in the past: one doesn’t need a single relatively strong argument to be confident in a subtle judgment call; ~4-8 independent weak arguments suffice. (Note that generating and keeping track of these isn’t computationally infeasible.) This is a crucial point, as it opens up the possibility of no longer relying on single seemingly strong arguments that aren’t actually all that strong.
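A toy way to see why a handful can suffice (my own illustration, assuming the arguments are fully independent and that each carries a modest likelihood ratio of about 2:1):

$$
\frac{P(H)}{P(\neg H)} \times \prod_{i=1}^{6} \frac{P(E_i \mid H)}{P(E_i \mid \neg H)} = 1 \times 2^6 = 64,
$$

i.e. posterior odds of 64:1, or roughly 98.5% confidence, from six individually weak arguments and an even prior. Real arguments are rarely fully independent, which is part of why one wants ~4-8 of them rather than 2-3.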
I believe that the point in #2 is closely related to what people call “common sense” or “horse sense” or “physicists’ intuition.” In the past, I had thought that “common sense” meant, specifically, “don’t deviate too much from conventional wisdom, because views that are far from mainstream are usually wrong.” Now I realize that it refers to something quite a bit deeper, and not specifically about conventional wisdom.
I’d suggest talking about these things with miruto.
On many questions in physics, ecology, etc., there’s a single factor that dominates all the rest. Maybe this is less true in human domains because rational agents tend to produce efficiency by eating up the free lunches.
Our chauffeur from last weekend has recently been telling me that physicists generally use the “many weak arguments” approach.
For example, the math used in quantum field theory remains without a rigorous foundation, and its discovery was analogous to Euler’s heuristic reasoning concerning the product formula for the sine function.
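(For readers who haven’t seen it, Euler’s heuristic runs roughly as follows: $\sin x$ vanishes exactly at $x = n\pi$ for integer $n$, so he factored it as if it were a polynomial with those roots,

$$
\frac{\sin x}{x} = \prod_{n=1}^{\infty} \left( 1 - \frac{x^2}{n^2 \pi^2} \right),
$$

and comparing the $x^2$ coefficient with the Taylor series $\sin x / x = 1 - x^2/6 + \cdots$ gives $\sum_{n \ge 1} 1/n^2 = \pi^2/6$. Nothing here justifies treating an infinite product like a polynomial; the formula was only proved rigorously later.)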
He also referred to scenarios in which (roughly speaking) you have a physical system with many undetermined parameters, along with ways of bounding different collections of them; by combining all of the resulting bounds, you can pin down the individual parameters tightly enough that the whole model is accurate.
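A minimal sketch of what such joint bounding can look like (my own toy example with made-up numbers, not the physics he described): three hypothetical parameters where each “measurement” constrains only a pairwise sum, yet the three constraints together pin every individual parameter into a narrow interval.

```python
import numpy as np
from scipy.optimize import linprog

# Each measurement constrains a *collection* of parameters:
#   1.9 <= x0 + x1 <= 2.1,  2.9 <= x1 + x2 <= 3.1,  3.9 <= x0 + x2 <= 4.1
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
lo = np.array([1.9, 2.9, 3.9])
hi = np.array([2.1, 3.1, 4.1])

# Encode lo <= A @ x <= hi as A_ub @ x <= b_ub.
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([hi, -lo])

# Any single constraint leaves each x_i completely unbounded; all three
# together confine each one to a width-0.3 interval.
for i in range(3):
    c = np.zeros(3)
    c[i] = 1.0
    xmin = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3).fun
    xmax = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3).fun
    print(f"x{i} in [{xmin:.2f}, {xmax:.2f}]")
```

Each bound on its own is a weak argument about a collection of parameters; it is their intersection that is tight.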
Cool. Yes, many examples of #1 come to mind. As for #2, I don’t believe I had thought of it as a principle specifically.
What I meant about a single factor dominating in physics was that in most cases, even when multiple factors are at play, one of them matters so much more than the others that you can ignore the rest. For example, an electron has a gravitational attraction to the atomic nucleus, but it is trivial compared with the electromagnetic attraction. Similarly, the electromagnetic repulsion between the protons in the nucleus is trivial compared with the strong force holding them together. It’s rare in nature to have a close competition between forces, at least until you get to higher-level domains like inter-agent competition.
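For concreteness (standard textbook figures, not something from this exchange): for an electron and a proton,

$$
\frac{F_{\text{electric}}}{F_{\text{gravitational}}} = \frac{e^2}{4\pi\varepsilon_0 \, G \, m_e m_p} \approx 2 \times 10^{39},
$$

independent of the separation, since both forces scale as $1/r^2$.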
Yes, I agree with this. My comments were about the sort of work that physicists do, as opposed to the relative significance of different physical forces in analyzing physical systems.
Do you have, or are you planning, an argument demonstrating in detail the use of MWAs in physics?
I don’t know physics, but I think that my post on Euler and the Basel Problem gives a taste of it.
I found this unhelpful, because I do math and frequently do non-rigorous mathematical reasoning, which seems to me to have much more of an ORSA flavor. Or maybe like two arguments: “here is a beautiful picture, it is correct in one trivial case, therefore it is correct everywhere.” My previous understanding was that physicists’ non-rigorous mathematical reasoning was much like mathematicians’, and that my own non-rigorous reasoning is typical. So to accept this claim I have to change some belief.
Do you think that the specific example of Euler and the Basel Problem doesn’t count as an example of the use of MWAs? If so, I don’t necessarily disagree, but I think it’s closer to MWAs than most mathematical work is, and may be representative of the sort of reasoning that physicists use.
There might just be a terminological distinction here. When I think of the reasoning used by mathematicians/physicists, I think of the reasoning used to guess what is true—in particular to produce a theory with >50% confidence. I don’t think as much of the reasoning used to get you from >50% to >99%, because this is relatively superfluous for a mathematician’s utility function—at best, it doubles your efficiency in proving theorems. Whereas you are concerned more with getting >99%.
This is sort of a stupid point, but Euler’s argument does not have very many parts, and the parts themselves are relatively strong. Note that if you take away the first, conceptual point, the argument is not very convincing at all (although this depends on how many even zeta values Euler calculates, and to what precision). It’s still a pretty far cry from the arguments frequently used in the human world.
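(That calculational corroboration is easy to reproduce today. A brute-force sketch of my own; Euler himself used series acceleration to get many digits:)

```python
from math import pi

# Partial sums of the even zeta values vs. Euler's closed forms.
N = 200_000
zeta2 = sum(1.0 / n**2 for n in range(1, N + 1))
zeta4 = sum(1.0 / n**4 for n in range(1, N + 1))

print(zeta2, pi**2 / 6)   # ~1.6449291 vs ~1.6449341 (tail of the sum is ~1/N)
print(zeta4, pi**4 / 90)  # both ~1.0823232 (tail ~1/(3N^3) is negligible)
```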
Finally, while I can see why Euler’s reasoning may be representative of the sort of reasoning that physicists use, I would like to see more evidence that it is representative. If all you have is the word of this chauffeur, that’s perfectly alright and I will go do something else.
I don’t have much more evidence, but I think that it’s significant that:
Physicists developed quantum field theory in the 1950s, and it still hasn’t been made mathematically rigorous, despite the fact that, e.g., Richard Borcherds appears to have spent 15 years (!!) trying.
The mathematicians I know who have studied quantum field theory have indicated that they don’t understand how physicists came up with the methods that they did.
These suggest that the physicists who invented the theory reasoned in a very different way from how mathematicians usually do.
That part seems obvious: physicists treat math as a tool; it does not need to be perfect to get the job done. It can be inconsistent, self-contradictory, use techniques way outside their original realm of applicability, remove infinities by fiat; anything goes, as long as it works. Of course, physicists do prefer fine tools polished to perfection, and complain when they aren’t, but will use them anyway, and will invent and build new crude ones when there is nothing available off the shelf.
What I was highlighting is the effect size.
As a mathematician, it’s possible to get an impression of the type “physicists’ reasoning isn’t rigorous, because they don’t use, e.g., epsilon-delta proofs of theorems involving limits, and their reasoning is like a sloppier version of mathematical reasoning.”
The real situation is closer to “physicists dream up highly nontrivial things that are true, that virtually no mathematicians would have been able to come up with without knowledge of physics, and that mathematicians don’t understand sufficiently well to be able to prove even after dozens of years of reflection.”
But mathematicians also frequently dream up highly nontrivial things that are true, that mathematicians (and physicists) don’t understand sufficiently well to be able to prove even after dozens of years of reflection. The Riemann hypothesis is almost three times as old as quantum field theory. There are also the Langlands conjectures, Hodge conjecture, etc., etc. So it’s not clear that something fundamentally different is going on here.
I agree that the sort of reasoning that physicists use sometimes shows up in math.
I don’t think that the Riemann hypothesis counts as an example: as you know, its truth is suggested by surface-level heuristic considerations, so there’s a sense in which it’s clear why it should be true.
I think that the Langlands program is an example: it constitutes a synthesis of many known number-theoretic phenomena that collectively hinted at some general structure; they can be thought of as “many weak arguments” for the general conjectures.
But the work of Langlands, Shimura, Grothendieck, and Deligne should be distinguished from the sort of work that most mathematicians do most of the time, which tends to be significantly more skewed toward deduction.
From what I’ve heard, quantum field theory allows one to accurately predict certain physical constants to 8 decimal places, while the reasons why the computations work remain very unclear. But I know essentially nothing about this. As I said, I can connect you with my friend for details.
Most physicists most of the time aren’t Dirac, Pauli, Yang, Mills, Feynman, Witten, etc.
No, but my impression is that the physics culture has been more influenced by the MWA style than mathematical culture has. In particular, my impression is that most physicists understand “the big picture” (which has been figured out by using MWAs) whereas in my experience, most mathematicians are pretty focused on individual research problems.
As a tangent, I think it’s relatively clear both how physicists tend to think differently from mathematicians and how they came up with path-integration-like techniques in QFT. In both math and physics, researchers come up with an idea based on intuition and then verify it appropriately. In math the correct notion of verification is proof; in physics it’s experimentation (with proof an acceptable second). The method of verification feeds back into how the researcher’s intuition works. In particular, physicists’ intuition is grounded in physical experience and (generally) a thoroughly imprecise understanding of the math, so from this perspective, using integral-like techniques without any established mathematical underpinnings is intuitively completely plausible. Mathematicians would shy away from this almost immediately, as their intuition would hit the brick wall of “no theoretical foundation.”
If you’re really curious, you can talk with my chauffeur (who has deep knowledge on this point).