If you had a lot of very smart coders working on a centuries-old operating system without ever once running it, where every function takes 1 hour to 1 day to understand, where each coder is under a lot of pressure to write useful functions and not so much to show that others' functions are flawed, and you pointed out that we don't see many important functions being shown to be wrong, I wouldn't even expect the code to compile, never mind run even after all the syntax errors were fixed!
The lack of important results being shown to be wrong is evidence, and even more interesting evidence is that (I've heard) when important results are shown to be wrong, there's often a simple fix. I'm still skeptical, though, because it just seems like such an impossible task!
People metaphorically run parts of the code themselves all the time! It's quite common for people to work through proofs of major theorems themselves. As a grad student you are expected to make an effort to understand the derivations of as many of the foundational results in your sub-field as you can. A large part of the rationale is pedagogical, but it is also good practice. It is definitely considered moderately distasteful to cite results you don't understand, and good mathematicians do try to minimize it. It's rare that an important theorem has a proof that is unusually hard to check yourself.
Also, a few people like Terence Tao have personally gone through a LOT of results and written up explanations. Terry Tao doesn't seem to report that he looks into field X and finds fatal errors.
As a grad student you are expected to make an effort to understand the derivations of as many of the foundational results in your sub-field as you can […] It is definitely considered moderately distasteful to cite results you don't understand, and good mathematicians do try to minimize it.
Yeah, that seems like a feature of math that violates assumption 2 of argument 1. If people are actually constantly checking each other's work and never citing anything they don't understand, that leaves me much more optimistic.
This seems like a rarity. I wonder how this culture developed.
One way the analogy with code doesn't carry over is that in math, you often can't even begin to use a theorem if you don't know a lot of detail about what the objects in the theorem mean, and knowing what they mean is often pretty close to knowing why the theorems you're building on are true. Being handed a theorem is less like being handed an API and more like being handed a sentence in a foreign language. I can't begin to make use of the information content in the sentence until I learn what every symbol means and how the grammar works, and at that point I could have written the sentence myself.