The Truth About Mathematical Ability
There’s widespread confusion about the nature of mathematical ability, for a variety of reasons:
Most people don’t know what math is.
Most people don’t know enough statistics to analyze the question properly.
Most mathematicians are not very metacognitive.
Very few people have more than a casual interest in the subject.
If the nature of mathematical ability were exclusively an object of intellectual interest, this would be relatively inconsequential. For example, many people are confused about Einstein’s theory of relativity, but this doesn’t have much of an impact on their lives. But in practice, people’s misconceptions about the nature of mathematical ability seriously interfere with their own ability to learn and do math, something that hurts them both professionally and emotionally.
I have a long-standing interest in the subject, and I’ve found myself in the unusual position of being an expert. My experiences include:
Completing a PhD in pure math at the University of Illinois.
Four years of teaching math at the high school and college levels (precalculus, calculus, multivariable calculus, and linear algebra).
Personal encounters with some of the best mathematicians in the world, and a study of great mathematicians’ biographies.
A long history of working with mathematically gifted children: as a counselor at MathPath for three summers, through one-on-one tutoring, and as an instructor at Art of Problem Solving.
Studying the literature on IQ and papers from the Study of Exceptional Talent as a part of my work for Cognito Mentoring.
Training as a full-stack web developer at App Academy.
Doing a large scale data science project where I applied statistics and machine learning to make new discoveries in social psychology.
I’ve thought about writing about the nature of mathematical ability for a long time, but there was a missing element: I myself had never done genuinely original and high quality mathematical research. After completing much of my data science project, I realized that this had changed. The experience sharpened my understanding of the issues.
This is the first of a sequence of posts where I try to clarify the situation. My main point in this post is:
There are several different dimensions to mathematical ability. Common measures rarely assess all of these dimensions, and can paint a very incomplete picture of what somebody is capable of.
What is up with Grothendieck?
I was saddened to learn of the death of Alexander Grothendieck several months ago. He’s the mathematician who I identify with the most on a personal level, and I had hoped to have the chance to meet him. I hesitated as I wrote the last sentence, because some readers who are mathematicians will roll their eyes as they read this, owing to the connotation (even if very slight) that the quality of my research might overlap with his. The material below makes it clear why:
“His technical superiority was crushing,” Thom wrote. “His seminar attracted the whole of Parisian mathematics, whereas I had nothing new to offer.” — René Thom, 1958 Fields Medalist
“When I was in Paris as a student, I would go to Grothendieck’s seminar at IHES… I enjoyed the atmosphere around him very much … we did not care much about priority because Grothendieck had the ideas that we were working on and priority would have meant nothing.” — Pierre Deligne, 1978 Fields Medalist
“[The IHES] is a remarkable place... I knew about it before I came there; it was a legendary place because of Grothendieck. He was kind of a god in mathematics.” — Mikhail Gromov, 2009 Abel Prize Winner
“On arriving at the IHES, we ordinary mathematicians share the same feeling that Muslims experience on a pilgrimage to Mecca. Here is the place where, for a dozen or so years, Grothendieck relentlessly explained the holy word to his apostles. Of that saga, only the apocrypha reached us in the form of big, yellow, boring-looking books edited by Springer. These dozens of volumes...are still our most precious working companion.” — Ngo Bao Chau, 2010 Fields Medalist
Based on these remarks alone, it seems hard to imagine how I could be anything like Grothendieck. But when I read Grothendieck’s own description of himself, it’s hauntingly familiar. He writes:
“I’ve had the chance...to meet quite a number of people, both among my “elders” and among young people in my general age group, who were much more brilliant, much more “gifted” than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle—while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things that I had to learn (so I was assured), things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates, almost by sleight of hand, the most forbidding subjects.”
When I mentioned this to a professor at a top math department who had taken a class with Grothendieck, he scoffed and said that he didn’t believe it, apparently thinking that Grothendieck was putting on airs in the above quotation – engaging in a sort of bragging, along the lines of “I’m so awesome that even though I’m not smart I was still one of the greatest mathematicians ever.” It is hard to reconcile Grothendieck’s self-description with how his colleagues describe him. But I was stunned by the professor’s willingness to dismiss the remarks of somebody so great out of hand.
In fairness to the professor, I myself am much better situated to understand how Grothendieck’s remarks could be sincere and faithful than most mathematicians are, because of my own unusual situation.
What is up with me?
I went to Lowell High School in San Francisco, an academic magnet school with ~650 students per year, who averaged ~630 on the math SAT (81st percentile relative to all college-bound students). The math department was very stringent with respect to allowing students to take AP calculus, apparently out of a self-interested wish to keep their average AP scores as high as possible. So despite the strength of the school’s students, Lowell only allowed 10% of students to take AP Calculus BC. I was one of them. The teachers made the exams unusually difficult for an AP Calculus BC course, so that students would be greatly overprepared for the AP exam. The result was that a large majority of students got 5s on the AP exam. By the end of the year, I had the 2nd highest cumulative average out of all students enrolled in AP Calculus BC. It would have been the highest if the average had been determined exclusively by tests, rather than homework that I didn’t do because I already knew how to do everything.
From this, people understandably inferred that I’m unusually brilliant, and thought of me as one of the select few who was a natural mathematician, having ability perhaps present in only 1 in 1000 people. When I pointed out that things had not always been this way, and that I had in fact failed geometry my freshman year and had to retake the course, their reactions tended to be along the lines of Qiaochu’s response to my post How my math skills improved dramatically:
I find this post slightly disingenuous. My experience has been that mathematics is heavily g-loaded: it’s just not feasible to progress beyond a certain point if you don’t have the working memory or information processing capacity or whatever g factor actually is to do so. The main conclusion I draw from the fact that you eventually completed a Ph.D. is that you always had the g for math; given that, what’s mysterious isn’t how you eventually performed well but why you started out performing poorly.
It’s not at all mysterious to me why I started out performing poorly. In fact, if Qiaochu had known only a little bit more, he would be less incredulous.
Aside from taking AP Calculus BC during my senior year, I also took the SAT, and scored 720 on the math section (96th percentile relative to the pool of college-bound students). While there are many people who would be happy with this score, there were perhaps ~60 students at my high school who scored higher than me (including many of my classmates who were in awe of me). Just looking at my math SAT score, people would think it very unlikely that I would come close to being the strongest calculus student in my year.
As far removed as my mathematical ability is from Grothendieck’s, we have at least one thing in common: our respective performances on some commonly used measures of mathematical ability are much lower than what most people would expect based on our mathematical accomplishments.
Hopefully these examples suffice to make clear that whatever mathematical ability is, it’s not “what the math SAT measures.” What the math SAT measures is highly relevant, but still not the most relevant thing.
What does the math SAT measure?
Just for fun, let’s first look at what the College Board has to say on the subject. According to The Official SAT Study Guide:
The SAT does not test logic abilities or IQ. It tests your skills in reading, writing and mathematics – the same subjects you’re learning in school. [...] If you take rigorous challenging courses in high school, you’ll be ready for the test.
The College Board’s disingenuousness here may shock some of you without any further comment being needed. How would they respond to my own situation? Most hypothetical responses are absurd: They could say “Unfortunately, you were underprivileged in having to go to the high school ranked 50th in the country, where you didn’t have access to sufficiently rigorous, challenging courses” or “While you did take AP Calculus BC, you didn’t take AP US History, and that would have further developed your mathematical reasoning skills” or “Our tests are really badly calibrated – we haven’t been able to get them to the point where somebody with 99.9 percentile level subject matter knowledge reliably scores at the 97th percentile or higher.”
Their strongest response would be to say that the test has been revised since I took it in 2002 to make it more closely aligned with the academic curriculum. This is true. But a careful examination of the current version of the test makes it clear that it’s still not designed to test what’s learned in school. For example, consider questions 16-18 in Section 2 of the sample test:
The grid above represents equally spaced streets in a town that has no one-way streets. F marks the corner where a firehouse is located. Points W, X, Y, and Z represent the locations of some other buildings. The fire company defines a building’s m-distance as the minimum number of blocks that a fire truck must travel from the firehouse to reach the building. For example, the building at X is an m-distance of 2, and the building at Y is an m-distance of 1/2 from the firehouse.
What is the m-distance of the building at W from the firehouse?
What is the total number of different routes that a fire truck can travel the m-distance from F to Z ?
All of the buildings in the town that are an m-distance of 3 from the firehouse must lie on a...
I don’t think that rigorous, challenging academic courses build skills that enable high school students to solve these questions. They have some connection with what people learn in school – in particular, they involve numbers and distances. But the connection is very tenuous – they’re extremely far removed from being the best test of what students learn in school. They can be solved by a very smart 5th grader who hasn’t studied algebra or geometry.
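To spell out just how little course content the questions draw on, here is a minimal sketch of my own (not anything from the College Board). On a full street grid, the minimum number of blocks is the taxicab distance, and the number of shortest routes is a binomial coefficient. Since the grid figure isn’t reproduced above, the coordinates below are hypothetical stand-ins rather than the actual test’s positions, and buildings are assumed to sit on street corners.

```python
# Illustrative sketch with hypothetical coordinates (the test's grid figure
# is not reproduced here). Positions are (east, north) offsets in blocks
# from the firehouse F, and buildings are assumed to sit on corners.
from math import comb

def m_distance(building, firehouse=(0, 0)):
    """Minimum number of blocks a truck must travel on a full grid:
    the taxicab (Manhattan) distance."""
    return abs(building[0] - firehouse[0]) + abs(building[1] - firehouse[1])

def route_count(building, firehouse=(0, 0)):
    """Number of shortest routes from the firehouse to the building:
    choose which of the (dx + dy) blocks are traveled east."""
    dx = abs(building[0] - firehouse[0])
    dy = abs(building[1] - firehouse[1])
    return comb(dx + dy, dx)

W = (3, 1)  # hypothetical: 3 blocks east, 1 block north of F
print(m_distance(W))   # 4
print(route_count(W))  # 4 shortest routes
```

Nothing in this reasoning depends on the algebra or geometry curriculum; it’s pattern-finding with small numbers.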
The SAT Subject Tests are much more closely connected with what students (are supposed to) learn in school. And they’re not merely tests of what students have memorized: some of the questions require deep conceptual understanding and the ability to apply the material in novel contexts. If the College Board wanted to make the SAT math section a test of what students are supposed to learn in school, they would do better to just swap it with the Mathematics Level 1 SAT Subject Test.
If the SAT math section measures something other than the math skills that students are supposed to learn in school, what does it measure? The situation is exactly what the College Board explicitly disclaims it to be: the SAT is an IQ test. This accounts for the inclusion of questions like the ones above, that a very smart 5th grader with no knowledge of algebra or geometry could answer easily, and that the average high school student who has taken algebra and geometry might struggle with.
The SAT was originally designed as a test of aptitude: not knowledge or learned skills. Though I haven’t seen an authoritative source, the consensus seems to be that the original purpose of the test was to help smart students from underprivileged backgrounds have a chance to attend a high quality college – students who might not have had access to the educational resources to do well on tests of what students are supposed to learn in school. Frey and Detterman found that as of 1979, the correlations between SAT scores and IQ test scores were very high (0.7 to 0.85). The correlations have probably dropped since then, as there have in fact been changes to make the SAT less like an IQ test, but to the extent that the SAT differs from the SAT subject tests, the difference corresponds to the SAT being more of a test of IQ.
The SAT may have served its intended purpose at the time, but since then there’s been mounting evidence that the SAT has become a harmful force in society. By 2007, things had reached the point that Charles Murray wrote an article advocating that the SAT be abolished in favor of using SAT subject tests exclusively. This will have significance to those of you who know Charles Murray as the widely hated author of The Bell Curve, which emphasizes the importance of IQ.
Twice exceptional gifted children
Let’s return to the question of reconciling my very strong calculus performance with my relatively low math SAT score. The difference comes in substantial part from my having a much greater love of learning than is typical of people of similar intelligence. I think that the same was true of Grothendieck.
I could have responded to Qiaochu’s suggestion that I had always had very high intelligence and that that’s why I was able to learn math well by saying “No, you’re wrong, my SAT score shows that I don’t have very high intelligence, the reason that I was able to learn math well is that I really love the subject.” But that would oversimplify things. In particular, it leaves two questions open:
A large part of why I failed geometry my freshman year of high school is that I wasn’t interested in the subject at the time. I only got interested in math after getting interested in chemistry my sophomore year. But almost nobody at my high school was interested in geometry, and almost everybody passed geometry. What made me different?
Can a love of learning really boost one’s standing from the top 1 in 30 to the top 1 in 1000? The gap seems awfully large to be accounted for exclusively by love of learning. And what of Grothendieck, for whom the gap may have been far larger?
Partial answers to these questions come from the literature on so-called “Twice Exceptional” (2e) children. The label is used broadly, to refer to children who are intellectually gifted and also have some sort of disability.
The central finding of the IQ literature is that people who are good at one cognitive task tend to be good at other cognitive tasks. For example, people who have better reaction time tend to also be better at arithmetic, better at solving logic puzzles, better able to give coherent explanations of real world concepts, and better able to recall a string of numbers that are read to them. When I was a small child, my teachers noticed that I was an exception to the rule: I had a very easy time learning some things and also found it very difficult to learn others. They referred me to a school psychologist, who found that I had exceptionally high reasoning abilities, but only average short term memory and processing speed: a 3 standard deviation difference.
There’s a sense in which my situation is actually not so unusual. The finding that people who are good at one cognitive task tend to be good at another is based on the study of people of average intelligence. It becomes less and less true as you look at people of progressively higher intelligence. Twice exceptional children are not very rare amongst intellectually gifted children. Linda Silverman writes
Gifted children may have hidden learning disabilities. Approximately one-sixth of the gifted children who come to the Center for testing have some type of learning disability—often undetected before the assessment—such as central auditory processing disorder (CAPD), difficulties with visual processing, sensory processing disorder, spatial disorientation, dyslexia, and attention deficits. Giftedness masks disabilities and disabilities depress IQ scores. Higher abstract reasoning enables children to compensate to some extent for these weaknesses, making them harder to detect.
This starts to explain why I failed geometry during my freshman year of high school. The material was boring and I wasn’t very focused on grades. But I also genuinely found it difficult to an extent that my classmates didn’t. Learning the material the way in which the course was taught required a lot of memorization – something that I was markedly worse at than my classmates at Lowell, who had been selected for having high standardized test scores.
It also explains why I didn’t score higher than 720 on the math section of the SAT. It wasn’t because I couldn’t answer questions like the ones that I pasted above. It was because some of the math SAT questions are engineered to trip up students who forget exactly what a problem asked for, or who are prone to arithmetic errors. Often a multiple choice question will have one wrong answer for every such mistake that a student might make. I used to think that this was a design flaw, and that the test makers didn’t know that they were penalizing minor mistakes very heavily. No – it wasn’t a design flaw – they designed the test that way on purpose. The questions test short-term memory as a proxy for IQ. I tried to avoid mistakes by being really systematic about my work and not taking shortcuts. But it wasn’t enough given the time constraints – making 3 minor mistakes on any combination of 54 questions is enough to reduce one’s score from 800 to 720.
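To make the point about slips under time pressure concrete, here is a rough illustrative model of my own (the per-question slip rate is an assumption, not something measured, and the 800-to-720 drop is taken from the paragraph above rather than modeled): treat each question as carrying a small independent chance of a minor mistake, and even a low rate makes three or more slips across 54 questions quite likely.

```python
# Illustrative binomial model: P(at least k minor slips) across n questions,
# assuming an independent per-question slip probability. The 3% rate below
# is purely hypothetical.
from math import comb

def prob_at_least_k_slips(n_questions, slip_rate, k):
    """P(k or more slips) under a binomial model of independent slips."""
    p_fewer = sum(
        comb(n_questions, i) * slip_rate**i * (1 - slip_rate)**(n_questions - i)
        for i in range(k)
    )
    return 1 - p_fewer

# With a hypothetical 3% chance of a minor slip per question, the probability
# of making 3 or more slips over 54 questions is already about 22%.
print(prob_at_least_k_slips(54, 0.03, 3))
```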
It’s plausible that something similar was true of Grothendieck.
It’s probably intuitively clear even to readers who are not mathematicians that math is not about being able to avoid making 3 minor mistakes on 54 questions. It’s very helpful to be quick and accurate, and my mathematical ability is far lower than it would have been if my speed and accuracy were substantially greater, but speed and accuracy are not the essence of mathematical ability.
What is the essence of mathematical ability?
I’ve only just scratched the surface of the subject of mathematical ability in this post, largely focusing on describing what mathematical ability isn’t rather than what mathematical ability is. In subsequent posts I’ll describe mathematical ability in more detail, which will entail a discussion of what math is. I’ll also address the question of how one can improve one’s mathematical ability.
Intelligence is highly relevant and largely genetic, but there are other factors that are collectively roughly as important, some of which are things that individuals are in fact capable of developing. For now, I’ll offer a teaser, which will be obscure to readers who lack substantial additional context, and which paints a very incomplete picture even when understood deeply, but which should nevertheless serve as food for thought. Grothendieck wrote:
In our acquisition of knowledge of the Universe (whether mathematical or otherwise) that which renovates the quest is nothing more nor less than complete innocence. It is in this state of complete innocence that we receive everything from the moment of our birth. Although so often the object of our contempt and of our private fears, it is always in us. It alone can unite humility with boldness so as to allow us to penetrate to the heart of things, or allow things to enter us and take possession of us.
This unique power is in no way a privilege given to “exceptional talents”—persons of incredible brain power (for example), who are better able to manipulate, with dexterity and ease, an enormous mass of data, ideas and specialized skills. Such gifts are undeniably valuable, and certainly worthy of envy from those who (like myself) were not so endowed at birth, “far beyond the ordinary”.
Yet it is not these gifts, nor the most determined ambition combined with irresistible will-power, that enables one to surmount the “invisible yet formidable boundaries” that encircle our universe. Only innocence can surmount them, which mere knowledge doesn’t even take into account, in those moments when we find ourselves able to listen to things, totally and intensely absorbed in child play.
Readers are welcome to speculate on what Grothendieck had in mind in writing this.
Cross-posted from my website.
I’m doing my math PhD at Harvard in the same area as Qiaochu. I was also heavily involved in artofproblemsolving and went to MathPath in 2003. I have hoped since 2003 that I could stake a manifest destiny in mathematics research.
Qiaochu and I performed similarly in Olympiad competitions, had similar performances in the same undergraduate program, and were both attracted to this website. However, I get the sense that he is driven quite a bit by geometry, or is at least not actively averse to it. Despite being a homotopy theorist, I find geometry awkward and unmotivated. I cannot form the “vivid” or “bright” images in my mind described in some other article on this website. Qiaochu is also far more social and active in online communities, such as this one and mathoverflow. I wonder about the impact of these differences on our grad school experiences.
Lately I’ve been feeling particularly incompetent mathematically, to the point that I question how much of a future I have in the subject. Therefore I quite often wonder what mathematical ability is all about, and I look forward to hearing if your perspective gels with my own.
I think it’s very important in understanding your first Grothendieck quote to remember that Grothendieck was thrown into Cartan’s seminar without requisite training. He was discouraged enough to leave for another institution.
Impostor syndrome is really, really common in mathematics. Trust me; if there’s a place in mathematics for me, then there’s a place for you too.
Seconded :).
More later, but just a brief remark – I think that one issue is that the top ~200 mathematicians are of such high intellectual caliber that they’ve plucked all of the low hanging fruit and that as a result mathematicians outside of that group have a really hard time doing research that’s both interesting and original. (The standard that I have in mind here is high, but I think that as one gains perspective one starts to see that superficially original research is often much less so than it looks.) I know many brilliant people who have only done so once over an entire career.
Outside of pure math, the situation is very different – it seems to me that there’s a lot of room for “normal” mathematically talented people to do highly original work. Note for example that the Gale-Shapley theorem was considered significant enough that it earned Shapley a share of a Nobel prize in economics (Gale had died by the time the prize was awarded), even though it’s something that a lot of mathematicians could have figured out in a few days (!!!). I think that my speed dating project is such an example, though I haven’t been presenting it in a way that’s made it clear why.
Of course, if you’re really committed to pure math in particular, my observation isn’t so helpful, but my later posts might be.
I disagree with this. I think it is a feature that all the low hanging fruit looks picked, until you pick another one. Also I am not entirely sure if there is a divide between pure math and stuff pure mathematicians would consider “applied” (e.g. causal inference, theoretical economics, ?complexity theory? etc.) other than a cultural divide.
Maybe our difference here is semantic, or we have different standards in mind for what constitutes “fruit.” Googling you, I see that you seem to be in theoretical CS? My impression from talking with people in the field is that there is in fact a lot more low-hanging fruit there.
I strongly agree with this, which is one point that I’ll be making later in my sequence.
But the cultural divide is significant, and it seems that in practice the most mathematically talented do skew heavily toward going into “pure math” so that more low hanging fruit has been plucked in the areas that mathematicians in math departments work in. I say this based on knowledge of apples-to-apples comparisons coming from people who work on math within and outside of “pure math.” For example, Razborov’s achievements in TCS have been hugely significant, but he’s also worked in combinatorics and hasn’t had similar success there. This isn’t very much evidence – it could be that the combinatorics problems that he’s worked on are really hard, or that he’s only done it casually – but it’s still evidence, and there are other examples.
Let’s say I am at an intersection of foundations of statistics and philosophy (?).
The (?) proves you right about the philosophy part.
The (?) was meant to apply to the conjunction, not the latter term alone.
And I think that anyone who makes even the slightest substantial contribution to homotopy type theory is doing interesting, original work. I think the Low-Hanging Fruit Complaint is more often a result of not knowing where there’s a hot, productive research frontier than of the universe actually lacking interesting new mathematics to uncover.
I partially respond to this here.
There’s a lot of potential for semantic differences here, and risk of talking past each other. I’ll try to be explicit. I believe that:
There are very few people who have a nontrivial probability of discovering statements about the prime numbers that are both true, that people didn’t already believe to be true, and that people find fascinating.
The same is not far from being true for all areas of math that have been mainstream for 100+ years: algebraic topology, algebraic geometry, algebraic number theory, analytic number theory, partial differential equations, Lie Groups, functional analysis, etc.
There is a lot of rich math to be discovered outside of the areas that pure mathematicians have focused on historically, and that people might find equally fascinating. In particular, I believe this to be true within the broad domain of machine learning.
There are few historical examples of mathematicians discovering interesting new fields of math without being motivated by applications.
That’s largely because machine learning is in its infancy. It is still a field largely defined by three very limited approaches:
Structural Risk Minimization (support-vector machines and other approaches that use regularization to work on high-dimensional data) -- still ultimately a kind of PAC learning, and still largely making very unstructured predictions based on very unstructured data
PAC learning—even when we allow ourselves inefficient (ie: super-poly-time) PAC learning, we’re still ultimately kept stuck by the reliance on prior knowledge to generate a hypothesis class with a known, finite VC Dimension. I’ve sometimes idly pondered trying to leverage algorithmic information theory to do something like what Hutter did, and prove a fully general counter-theorem to No Free Lunch saying that when the learner can have “more information” and “more algorithmic information” (more compute-power) than the environment, the learner can then win. (On the other hand, I tend to idly ponder a lot about AIT, since it seems to be a very underappreciated field of theoretical CS that remains underappreciated because of just how much mathematical background it requires!)
Stochastic Gradient Descent, and most especially neural networks: useful in properly general environments, but doesn’t tell the learner’s programmer much of anything that makes a human kind of sense. Often overfits or finds non-global minima.
To those we are rapidly adding a fourth approach, that I think has the potential to really supplant many of the others:
Probabilistic programming: fully general, more capable of giving “sensible” outputs, capable of expressing arbitrary statistical models… but really slow, and modulo an Occam’s Razor assumption, subject to the same sort of losses in adversarial environments as any other Bayesian methods. But a lot better than what was there before.
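To make the earlier remark about PAC learning’s reliance on prior knowledge concrete, here is the textbook sample-complexity bound for a finite hypothesis class in the realizable case; this is a standard result rather than anything specific to the approaches above, and the numbers in the example are purely illustrative.

```python
# Standard PAC bound for a finite hypothesis class (realizable case):
# m >= (1/eps) * (ln|H| + ln(1/delta)) samples suffice so that, with
# probability >= 1 - delta, any hypothesis consistent with the sample
# has true error <= eps. The guarantee is driven entirely by prior
# knowledge of the hypothesis class (here, its size).
from math import log, ceil

def pac_sample_bound(hypothesis_count, eps, delta):
    """Sufficient sample size for the finite-class, realizable PAC guarantee."""
    return ceil((log(hypothesis_count) + log(1 / delta)) / eps)

# Illustrative numbers: |H| = 2**20, eps = 0.05, delta = 0.01
print(pac_sample_bound(2**20, 0.05, 0.01))  # a few hundred samples
```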
What do you mean by people find fascinating, and how many people? It seems like a lot of the work in your first bullet point is being done by the last three words.
Upvoted for being specific.
Your standards seem unusually high. I can cite several pieces of highly interesting and original work by mathematicians who would most probably not be in your, or any, top ~200 list. For example,
Recursively enumerable sets of polynomials over a finite field are Diophantine by Jeroen Demeyer, Inventiones mathematicae, December 2007, Volume 170, Issue 3, pp 655-670
Maximal arcs in Desarguesian planes of odd order do not exist by S. Ball, A. Blokhuis and F. Mazzocca, Combinatorica, 17 (1997) 31--41.
The blocking number of an affine space by A. Brouwer and A. Schrijver, JCT (A), 24 (1978) 251-253.
I would like to know more about the perspective you claim to have gained which makes you think this particular way.
Yes, this is true. There are a number of reasons for this, but one is an encounter with Goro Shimura back in 2008 that left an impression on me – I thought about his words for many years.
I’ll write more tomorrow.
I am so confused as to why your standard seems to be so absurdly high to me.
Is it because my particular subfield is unusually full of low-hanging fruit? Or because so few of those ~200 top mathematicians work in it?
Is it because I don’t see how “superficially original” all of the work done in my field is? I lack perspective?
Anyway, this is really weird.
The way in which I operationalize the originality / interest of research is “in 50 years, what will the best mathematicians think about it?” I think that this perspective is unusual amongst mathematicians as a group, but not among the greatest ones. I’d be interested in how it jibes with your own.
Anyway, I think that if one adopts this perspective and takes a careful look at current research using Bayesian reasoning, one is led to the conclusion that almost all of it will be considered to be irrelevant (confidence ~80%).
When I was in grad school, I observed people proving lots of theorems in low dimensional topology that were sort of interesting to me, but it’s also my best guess that most of them will be viewed in hindsight as similar to how advanced Euclidean geometry theorems are today – along the lines of “that’s sort of pretty, but not really worthy of serious attention.”
How old are you?
When I started grad school, I was blown away by how much the professors could do.
A few years out of grad school, I saw that a lot of the theorems were things that experts already knew could be proved using certain techniques, and that proving them was in some sense a matter of the researchers dotting their i’s and crossing their t’s.
And in situations where something seemed strikingly original, the basic idea often turned out to be due to somebody other than the author of a paper (not to say that the author plagiarized – on the contrary, the author almost always acknowledged the source of the idea – but a lot of times people don’t read the fine print well enough to notice).
For example, the Wikipedia page on Paul Vojta describes how his well-known conjectures grew out of an analogy between Nevanlinna theory and diophantine analysis.
I had the chance to speak with Vojta and ask how he discovered these things, and he said that his advisor Barry Mazur suggested that he investigate possible parallels between Nevanlinna theory and diophantine analysis.
Similarly, even though Andrew Wiles’ work on Fermat’s Last Theorem does seem to be regarded by experts as highly original, the conceptual framework that he used had been developed by Barry Mazur, and I would guess (weakly – I don’t have an inside view – just extrapolating based on things that I’ve heard) that people with deep knowledge of the field would say that Mazur’s contribution to the solution of Fermat’s last theorem was more substantial than that of Wiles.
Eegads. How do you even imagine what those people will be like?
Sure, I don’t think anyone I know really thinks of their work that way.
29, graduating in a few months.
Yeah, sure, that’s the vast majority of everything I’ve done so far, and some fraction of the work my subfield puts out.
The people two or three levels above me, though, they’re putting out genuinely new stuff on the order of once every three to five years. Maybe not “the best mathematicians 50 years from now think this is amazing” stuff, but I think the tools will still be in use in the generation after mine. Similar to the way most of my toolbox was invented in the ’70s and ’80s.
I don’t understand your concept of originality. It has to be created in a vacuum to be original?
In the counterfactual where Vojta doesn’t exist, does Mazur go on to write similar papers? Is that the problem?
Well, if, e.g. you’re working on a special case of an unsolved problem using an ad hoc method with applicability that’s clearly limited to that case, and you think that the problem will probably be solved in full generality with a more illuminating solution within the next 50 years, then you have good reason to believe that work along these lines has no lasting significance.
Not consciously, but there’s a difference between doing research that you think could contribute substantially to human knowledge and research that you know won’t. I think that a lot of mathematicians’ work falls into the latter category.
This is a long conversation, but I think that there’s a major issue of the publish or perish system (together with social pressures to be respectful to one’s colleagues) leading to doublethink, where on an explicit level, people think that their own research and the research of their colleagues is interesting, because they’re trying to make the best of the situation, but where there’s a large element of belief-in-belief, and they don’t actually enjoy doing their work or hearing about their colleagues’ work in seminars. Even when people do enjoy their work, they often don’t know what they’re missing out on by not working on things that they find most interesting on an emotional level.
This sounds roughly similar to what I myself believe – the differences may be semantic. I think that work can be valuable even if people don’t find it amazing. I also think that there are people outside of the top 200 mathematicians who do really interesting work of lasting historical value – just that it doesn’t happen very often. (Weil said that you can tell that somebody is a really good mathematician if he or she has made two really good discoveries, and that Mordell is a counterexample.) It’s also possible that I’d consider the people who you have in mind to be in the top 200 mathematicians even if they aren’t considered to be so broadly.
It’s hard to convey effect sizes in words. The standard that I have in mind is “producing knowledge that significantly changes experts’ Bayesian priors” (whether it be about what mathematical facts are true, or which methods are useful in a given context, or what the best perspective on a given topic is). By “significantly changes” I mean something like “uncovers something that some experts would find surprising.”
I don’t have enough subject matter knowledge to know how much Vojta added beyond what Mazur suggested (it could be that upon learning more I would consider his marginal contributions to be really huge). I guess in bringing up those examples I didn’t so much mean “Vojta and Wiles didn’t do original work – it had already essentially been done by Mazur” as much as “the original contributions in math are more densely concentrated in a smaller number of people than one would guess from the outside,” which in turn bears on the question of how someone should assess his or her prospects for doing genuinely original work in a given field.
I agree with your assessment of things here, but I do think it’s worth taking a moment to honor people who take correct speculation and turn it into a full proof. This is useful cognitive specialization of labor, and I don’t think it makes much sense to value originality over usefulness.
It is really hard to tell when an ad hoc method will turn out many years later to be a special case of some more broad technique. It may also be that the special case will still need to be done if some later method uses it for bootstrapping.
I’m not sure about this at all. Have you tried talking to people who aren’t already in academia about this? As far as I can tell, they think that there are a tiny number of very smart people who are mathematicians and are surprised to find out how many there are.
There are questions of quantitative effect sizes. Feel free to give some examples that you find compelling.
By “from the outside” I mean “from the outside of a field” (except to the extent that you’re able to extrapolate from your own field.)
Yes, and I’m not sure how to measure that.
Fermat’s Last Theorem. The proof assumed that p >= 11 and so the ad hoc cases from the 19th century were necessary to round it out. Moreover, the attempt to extend those ad hoc methods led to the entire branch of algebraic number theory.
Primes in arithmetic progressions: much of what Tao and Greenberg did here took earlier methods that were somewhat ad hoc and extended them in a deep, systematic way. In fact, one can see a large fraction of modern work that touches on sieves as taking essentially ad hoc sieve techniques and generalizing them.
I don’t recall Wiles’ proof assuming that p >= 11 – can you give a reference? I can’t find one quickly.
The n = 3 and 4 cases were proved by Euler and Fermat. It’s prima facie evident that Euler’s proof (which introduced a new number system with no historical analog) points to the existence of an entire field of math. I find this less so of Fermat’s proof as he stated it, but Fermat is also famous for the obscurity of his writings.
I don’t know the history around the n = 5 and n = 7 cases, and so don’t know whether they were important to the development of algebraic number theory, but exploring them is a natural extension of the exploration of new kinds of number systems that Euler had initiated.
They were subsumed by Kummer’s work, which I understand to have been motivated more by a desire to understand algebraic number fields and reciprocity laws than by Fermat’s last theorem in particular. For this, he developed the theory of ideal numbers, which is very general.
Ben Green, not Greenberg :-).
Sure, but the ultimate significance of the work remains to be seen. Of course, tastes vary, and there’s an element of subjectivity, but I think that we can agree that even if there’s a case for the proof being something that people will find interesting in 50 years, the prior in favor of it is much weaker than the prior in favor of this being the case for, e.g., the Gross-Zagier formula.
I think this is in the original paper that modularity implies FLT, but I’m on vacation and don’t have a copy available to check. Does this suffice as a reference?
Yes, thank you.
Sure, but Kummer was aware of the literature before him, and almost certainly used their results to guide him.
Agreement there may depend very strongly on how you unpack “much weaker,” but I’d be inclined to agree with at least “weaker,” without the “much.”
How good do you consider past mathematicians to have been at judging future interest 50 years down the line? Do you think that 50 years ago mathematicians understood the significance of all findings made at the time that turned out to be significant?
How do you make a priori judgments on who the best mathematicians are going to be? In your opinion, what qualities/achievements would put someone in the group of best mathematicians?
How different would your deductions be if you were living in a different time period? How much does that depend on the areas in mathematics that you are considering in that reasoning?
I’m not sure what gives you this impression. In my own field (number theory) I don’t get that feeling at all. There may not be much low hanging fruit, but there’s more than enough for people who aren’t that top 200 to do very useful work.
I’d certainly defer to you in relation to subject matter knowledge (my knowledge of number theory really only extends through 1965 or so), but this is not the sense that I’ve gotten from speaking with the best number theorists.
When I met Shimura, he was extremely dismissive of contemporary number theory research, to a degree that seemed absurd to me (e.g. he characterized papers in the Annals of Mathematics as “very mediocre”). I would ordinarily be hesitant to write about a private conversation publicly, but he freely and eagerly expresses his views to everyone he meets. Have you read The Map of My Life? He’s very harsh and cranky and perhaps even paranoid, but that doesn’t undercut his track record of being an extremely fertile mathematician. As I reflected on his comments and learned more over the years (after meeting with him in 2008), his position came to seem progressively more sound (to my great surprise!).
A careful reading of Langlands’ Reflexions on receiving the Shaw Prize hints that he thinks that the methods that Taylor and collaborators have been using to prove theorems such as the Sato-Tate conjecture won’t have lasting value, though he’s very guarded in how he expresses himself. I remember coming across a more recent essay where he was more explicit and forceful, but I forget where it is (somewhere on his website, sorry, I realize that this isn’t so useful). It’s not clear to me that Taylor would disagree – he may explicitly be more committed to solving problems in the near term than to creating work of lasting value.
One can speculate that these views are driven by arrogance, but they’re not even that exotic outside of the set of people who have unambiguously done great work. For example, the author of the Galois Representations blog, who you probably know of, responded to Jordan Ellenberg in a way that apparently implicitly characterized his own work as insignificant. And there aren’t very many number theorists as capable as him.
Replying separately so it isn’t missed. I wonder also how much of these issues is the two cultures problem that Gowers talks about. The “top people” conception seems to at least lean heavily toward the theory-builder side.
I agree with you, and there are strong problem solver types who conceptualize mathematical value in a different way from the people who I’ve quoted and from myself.
Still, there are some situations where one has apples-to-apples comparisons.
There’s a large body of work giving unconditional proofs of theorems that would follow from the Riemann hypothesis and its generalizations. Many problem solvers would agree that a proof of the Riemann hypothesis and its generalizations would be more valuable than all of this work combined.
We don’t yet know how or when the Riemann hypothesis will be proved. But suppose that Alain Connes’ approach using noncommutative geometry (which seems most promising right now, though I don’t know how promising) turns out to be possible to implement over the next 40 years or so. In this hypothetical, what attitude do you think that problem solvers would take to the prior unconditional proofs of consequences of RH?
Yes, but at the same time (against my earlier point) the best problem solvers are finding novel techniques that can be then applied to a variety of different problems- that’s essentially what Gowers seems to be focusing on.
I’m not sure I’d agree with that, but I feel like I’m in the middle of the two camps so maybe I’m not relevant? All those other results tell us what to believe well before we actually have a proof of RH. So that’s at least got to count for something. It may be true that a proof of GRH would be that much more useful, but GRH is a much broader idea. Note also that part of the point of proving things under RH is to then try and prove the same statements with weaker or no assumptions, and that’s a successful process.
I’m not sure. Can you expand on what you think would be their attitude?
I’ll volunteer another reason not to necessarily pay attention to my viewpoint: I’m pretty clearly one of those weaker mathematicians, so I have obvious motivations for seeing all of that side work as relevant.
I suspect that one can get similar viewpoints from people who more or less think the opposite but that they aren’t very vocal because it is closer to being a default viewpoint, but my evidence for this is very weak. It is also worth noting that when one does read papers by the top named people, they often cite papers from people who clearly aren’t in that top, using little constructions or generalizing bits or the like.
I’ll note that I think that there are people other than top researchers who have contributed enormously to the mathematical community through things other than research. For example, John Baez is listed amongst the mathematicians who influenced MathOverflow participants the most, in the same range as Fields medalists and historical greats, based on his expository contributions.
Yes, this is true and a good point. It can serve as a starting point for estimating effect sizes.
I’m not qualified to judge the accuracy of these claims, but I was speaking with a PhD in physics who said that he thought that only ~50 people in theoretical physics were doing anything important.
You should note that even great mathematicians, Michael Atiyah among them, sometimes question their abilities.
Mathematical reasoning as such (and how exactly humans perform it) is extremely fascinating, as is the article. I offer a tentative explanation of why people who are slow to pick mathematics up at first later go on to dominate it: the search algorithm (if you’ll tolerate a loose metaphor) their cognitive software is running is breadth-first. When they first begin to learn mathematics their neurons are assaulted with a slew of possible interpretations—assigning a clear semantics to the notation through a haze of conflicting ideas is difficult. In fact, it can be intellectually paralyzing. Repeatedly investigating faulty interpretations due to assigning a slightly wrong semantics will leave you intellectually exhausted and seemingly no closer to a solution.
Mathematics may be easier on first introduction if you completely ignore the semantics of your notation, and reason strictly within it. People who are capable of doing this would seem to be quickly mastering the subject, while what they’re really doing is rigid symbol-shifting rather than getting beneath the notation. If you ask such a person to reason outside the notation, they’ll founder.
An attendant explanation is that slow-learning mathematicians synthesize their symbol-manipulation procedures from the ground up. Simply following instructions produces intellectual discomfort, so they have to understand small things totally before they can proceed to justify using those small things in more complex ways. They’re driven by their own instinctive desire for rigour to learn the hard (but ultimately more thorough and edifying) way.
I’m not a mathematician, but this was definitely what it felt like when I first attempted to learn computer programming, and what it felt like when I started taking mathematics seriously in school.
I just wanted to mention that I think that these are great points. I hope to respond substantively later, though I don’t know when I’ll be able to.
This is a really good post, thank you.
As you say, getting really, really high scores on a test like the SAT Math requires you to be good at not screwing up. The ability to get 200 out of 200 “easy” questions right (when the median score is something like 190 out of 200) and the ability to get at least 10 out of 20 “really hard” problems correct (when the median score is something like 3 out of 20) are totally different things.
When I took the SAT I, I got 800 Verbal and 750 Math. My raw score showed that I had six questions wrong on both sections.
Yeah, but the two things correlate anyway, and in practice the most mathematically talented people usually don’t make any mistakes at all on 200 easy questions.
As a single data point: I have been able to reliably score an 800 on the SAT Math section since I was 14 or so (I’m 16 right now). Seeing as the SAT has shifted from the 1600-scale to the 2400-scale, it’s possible there are some differences, but looking at the problem example provided in the article, they feel more or less identical to me, so I don’t think it’s beyond making comparisons. I don’t find SAT math very difficult at all, and I don’t make careless mistakes on the problems either, which does suggest to me that it’s easier to avoid careless mistakes on problems you find easy. On the other hand, I find the last 5 or so questions on the AIME (American Invitational Mathematical Examination) absolutely hellish in terms of difficulty, and I’ve noticed a significant proportion of careless errors in my work when doing those problems.
I’m not exactly sure why this is; intuitively I’d expect there to be little to no correlation between the difficulty of the problem itself and the difficulty of avoiding careless mistakes when doing those problems, but clearly this is not so (at least in my admittedly extremely limited experience). The best explanation I have right now (which is almost certainly a just-so story, but whatever) is that humans have a limited amount of “concentration ability”, and devoting more concentration to doing the problem results in less focus on avoiding careless errors, or vice versa.
I find myself bristling at this article, but I think it might be for bravery debate reasons. That is, I think I have a well-calibrated sense of what mathematical ability looks like, how we can measure it, and so on, and this article seems to be targeted at people who are miscalibrated in one particular way.
An example:
“Just looking at my math SAT score, people would think it very unlikely that I would come close to being the strongest calculus student in my year.”
Really? Which people would think that? The Math SAT is so simple that I studied for it the last time I took it because it had been too many years since I had originally learned the material. For highly numerate people, the Math SAT is mostly an error-counting competition; I was particularly lucky that the year that I took it, one mistake would only knock you down from 800 to 780. The verbal is more suited to the actual range of students, where you can miss several questions and still get an 800, because there aren’t a large number of perfect scores running around. (The range of the Math SAT is too small, basically.)
And so I think if you told someone familiar with math education “hey, here’s a calculus class of 65 students drawn from a population of 650 students at a magnet school, and here are their math SAT scores. What’s your posterior on any student being the top student in that class?”, you would find that they didn’t adjust their (presumably uniform) prior all that much on learning the SAT scores. And if you had told them “well, the ‘top student’ isn’t the actual top student on intelligence and conscientiousness, but just on intelligence” they might have actually preferred the lower SAT scores (given that the lowest Math SAT score in your class presumably cleared 650, probably even 700), as they’re evidence for lower conscientiousness.
I guess there are two things to say here.
One is that it is in fact the case that a significant fraction of people less knowledgeable than you would accord significantly more weight to the math SAT than you would.
The other is that my observation has been that the most mathematically talented people who I know have usually scored 800 on the math SAT … You seem to be claiming that a ceiling effect makes the test a bad measurement instrument, which certainly is true to some extent, and which is a priori plausible, but you may have been less lucky than you think.
Specifically, a bad measurement instrument at differentiating very high levels of mathematical ability. It works as well as you would expect when the measurement error doesn’t hit the ceiling or floor.
I should be clearer about my ‘luck’ claim: what a raw score of “all but one right, one wrong” gets you depends on the percentage of students who got “all right” that year, which depends on that year’s test difficulty. Some years it’s 760, some years it’s 780, and so on. (If I remember correctly, I got both of those processed scores from taking it two times and getting the same raw score.) I do not think the underlying raw score of “all but one right, one wrong” is due to luck (in the sense that my underlying skill creates a family of rate parameters for Poisson distributions that are summed together to get a total error count, and while any sample from that distribution is stochastic, the distribution is very narrow).
See my comment here. I agree to some extent, but the correlation between cognitive ability and math SAT scores is positive for all levels of cognitive ability and SAT math scores, including the highest ones (even if it becomes substantially smaller).
Added: To operationalize the situation, I would guess that the frequency with which mathematicians who have won famous prizes (Abel Prize, Fields Medal, etc.) would miss no questions at all (say, as 18 year olds) would be noticeably higher than the corresponding frequency for professors at top 50 math departments. I’ll give evidence in subsequent posts.
I agree that I expect it would be higher, though I would describe my expectation as “modest,” which probably overlaps with “noticeable.”
As a former (3-time) MathPath student, I have the feeling I’ve seen you before. I must admit that it’s only a feeling.
As far as Grothendieck goes, I think he is simply channeling Buddhism’s concept of beginner’s mind. Nothing new, really. Most quotes are null-content “yes I’m a human” type things. The main problem I have with your post is that none of it is math-specific; take out the “math” repetition, the few mentions of calculus etc., and it’s simply a generic description of ability.
I don’t have the experience that people who are serious about beginner’s mind speak of how other people in their age group are much more brilliant, much more “gifted”.
I was there in 2004, 2005 and 2006.
I think that there’s some overlap, but that’s not all of it. The quotation is taken from a much larger document that he wrote. Maybe it’s not realistic to expect people to be able to guess with so little context. Anyway, more will be forthcoming.
I’ve said very little about what mathematical ability is in this post; it’s the first of a sequence, and I’ll write more about things specific to mathematics in my upcoming posts. However, I’m also not sure what you mean. A large fraction of the post is about the math SAT and cognitive abilities that are more relevant to math than they are to most other activities.
I was there 2005-2007.
I just rewrote your post to be about me / Steve Jobs instead of you / Grothendieck, see http://lesswrong.com/r/discussion/lw/lpp/a_long_comment/. Maybe you can understand what I mean about the post “mostly not being about math”.
Right, so we did overlap, and probably interacted at least a little bit.
Your rewriting feels strained to me, but regardless, the issue is of little consequence – like I said, I’ll be getting into the particulars of math more soon.
He did that on purpose, didn’t he?
Maybe his translator, but not him.
I’m extremely grateful for this post, and look forward to the rest of the sequence.
For me this is also of great personal relevance—I too am among the “twice exceptional” (*), and am chagrined that this concept, as the Wikipedia article says, “has only recently entered educators’ lexicon”. You won’t be surprised to know that (as I think we’ve even discussed before privately) Grothendieck’s description of himself—and his mathematical style, insofar as I understand it—is also something that I identify with very strongly.
(*) illustrative anecdote: in 9th grade, I received a “D” in geometry during the same term that I won a state competition in that subject.
Interesting. I also found this article extremely personally relevant. While not as big a contrast as your case, in 8th grade I got a D- in geometry and honors on the California Golden State Exam for geometry—the latter much to the amazement of my teacher, who was forced to present the award to me at assembly.
I also identify with the duality from the article of being an average thinker in the moment but having very strong reasoning skills. It seems like half the smart people I know are sharper in the moment than I am, yet people regard my reasoning and verbal debate skills as very good. I’ve never quite known how to reconcile this with how g seems to work.
Geometry award notwithstanding, I’ve never been good at math, though I was always put in the gifted math classes. Everyone knows kids who didn’t study and still aced the exams—some here are those kids. Whether I studied or didn’t study, I would always get Cs and Ds. I never had enough time to finish the tests. When I took physics, the professor had a rule that you could take as long as you liked on exams; this let me get the highest grade in the class on every one.
I would love to be better at math, because it’s important, but it’s not intrinsically interesting to me. Today I’m a software developer and I learn whatever math I need, but what interests me is tools and efficiency through design, and I prefer to work at the functional layer, where the math I learned in high school and college isn’t the longest lever.
Recently, due to articles on Less Wrong and such, I’ve come to realize that there are math subjects I probably do have an interest in, but they weren’t the math foundations we grow up with, at least in the US. Continuous math is boring to me, but discrete math—starting with probabilities—I can find lots of programming and everyday uses for, so much so that I’m considering going back and finishing a probability and statistics degree.
Looking forward to the rest of this series.
A word of warning: probability and statistics make heavy use of continuous math. Here Be Integrals.
Unless you take Edward Nelson’s approach.
Sorry, I haven’t heard of him. Could you explain and/or link?
He worked on both ultrafinitism and nonstandard analysis, bringing new approaches to both. He was discussed around here a couple of years ago for his claim to have proved the existence of a contradiction in Peano Arithmetic, but Terry Tao found the flaw. Unfortunately, he died last year.
In his nonstandard days, he wrote Radically Elementary Probability Theory. This is not simply redoing standard continuous probability theory with nonstandard analysis, but doing discrete probability theory with nonstandard integers. All of the useful things appear, but it all looks very different. Princeton still hosts the PDF: https://web.math.princeton.edu/~nelson/books/rept.pdf
ETA: While ultrafinitism and nonstandard analysis seem almost complete opposites as far as constructivism goes, his approaches to them are actually quite similar to one another.
Third alternative: Perhaps Grothendieck felt slow to understand puzzles because he had unrealistic expectations of himself. Perhaps he felt his thinking was sluggish because he had higher standards set for himself. Perhaps he felt the problems were extremely difficult because he was working with problems others barely recognized the existence of.
Knowing very little about mathematics and nothing about Grothendieck, this possibility is what I find most plausible.
The phenomenon that you allude to (great researchers setting a high standard for themselves and feeling inadequate relative to it even while doing extremely good work by most people’s standards) is a real one, but in the above passage Grothendieck is in part (explicitly) comparing himself with other people who he knows.
A version of your comment that takes this into account is that he may have been comparing his speed with that of the greatest mathematicians in the world (e.g. his close correspondent Jean-Pierre Serre, who I believe to be the youngest Fields medalist in history).
That’s a fair reply and I see value in it, but I also suspect Grothendieck was comparing himself to specialists while he himself pursued an unusually broad understanding of deep mathematics.
Math PhD student here. It seems to me that mathematical ability is a nebulous concept. I’ve noticed in courses I taught that grades tend to reward conscientious students who can “play the game” and do formal manipulations even if they don’t really understand what’s going on. Courses tend to move fast enough that very few students can keep up with all the concepts, so the ones who can’t play the game and don’t keep up with the concepts end up struggling.
Personally, I had little patience for that. I seldom memorized formulas. Either I knew them from repeated use or I just derived them on an exam. I always felt odd when I had to apply a technique or formula I didn’t understand. I would say that love of learning seems to have played a significant role. When I truly want to learn something and take pleasure in doing so, I’ll devour the subject. I have trouble making progress in topics I feel forced to learn.
Yes, I’ll be discussing the points that you raise (in the context of other things) in my subsequent posts. Thanks for your interest.
Calculus 2 is where I hit the limits of my conceptual abilities. I am very bad at “playing the game” in this way, so I haven’t moved beyond that yet.
I think it’s wrong to put too much emphasis on a contrast between “playing the game” and “understanding the material”, though. My feeling is that if I became better at playing games, paying attention to detail, being more conscientious about my work, then I would also improve my conceptual understanding after a while.
Indeed, the mathematical profession itself relies on this for the training of its members, because it doesn’t know how to train conceptual understanding directly—as described candidly by Ravi Vakil:
I seem to be unusual (among people attracted to advanced mathematics, but perhaps not so much in the LW cluster) in being mostly unable to tolerate such an approach.
My inability to deal with this approach is a good part of why I switched away from number theory after about three semesters of graduate school (I got my PhD in another area of math). The expectation that students would learn the advanced material via “fake it till you make it” was endlessly frustrating to me and actively bad for my learning and mental health.
To be sure, there’s some of this in most areas of math, but my admittedly limited impression is that the situation is worse in number theory and algebraic geometry than in some other fields.
This is a really good quote, thank you.
“Young man, in mathematics you don’t understand things, you just get used to them!”—John von Neumann
In synthetic approaches to mathematical subjects, it’s not necessarily meaningful to ask what a mathematical object “is”, or “what’s going on”. It’s not about things being less than rigorous—rather, all that matters is the axioms and rules of inference you get to use in that particular area. ISTM that extending “tendrils of knowledge” can be modeled as making such ‘synthetic’ inferences, whereas backfilling involves finding different models of the same theories, to make conceptual understanding more feasible.
I’ll let you in on a secret: almost everyone hits the limit in Calculus 2. For that matter, most people hit the limit in Calculus 1 so you were ahead of the curve. That doesn’t mean no one understands calculus, or that you can’t learn it. It just means most students need more than one pass through the material. For instance, I don’t think I really understood integration until I learned numerical analysis and the trapezoidal rule in grad school.
There’s a common saying among mathematicians: “No one understands calculus until they teach it.”
Well, yes.
I didn’t understand a lot of math I aced until much later.
This may just be that you don’t really understand any area of math well until you’ve taught it.
I’ve known professors who decided that the best way to learn a new topic was to teach a class in it.
… This basically explains my entire life and really makes me feel a lot better about the whole thing.
Also, I feel that the tendency towards Mathematical Platonism has poisoned professional math and math pedagogy by making people view mathematical ability as a kind of divinatory superpower rather than as a lawful cognitive activity—but at some point I might as well just write an entire article ranting and railing on behalf of constructivism rather than elaborate here.
I’m very interested in this series. I was good, although not exceptional, at math up until high school: I did fine in geometry, I’ve completely lost all memory of the other math class I took at that school, and then I moved, had two terrible-fit math teachers in a row, and barely passed stats to fill my math requirement in college. Things haven’t improved since; I’ve been known to flee the room from excess math. I expect that with enough work I could get it back, at least in sub-areas of math without the properties that set me off and with a supernaturally accommodating teacher, but I’m not sure it’s worth it; still, I’d like to know more about what exactly I lost. (If you could write this series without too much recourse to numerically dense examples, or at least make them skippable, I’d appreciate it.)
Fascinating stuff. Here are some largely unrelated hasty generalizations based on my limited experience in the world of software development.
I’ll classify the software development work I’ve done as falling into three broad categories: frontend, backend, and data analysis:
Frontend software development is the most forgiving of mistakes & imperfect code. There’s little opportunity to do permanent damage to production data. It’s typically easy to test your code by clicking around a bunch, so thinking through the problem carefully to ensure correctness by design is often overkill. Also, requirements are more subject to change and code gets thrown away quicker, so code written to be correct by design has less time to accrue benefits over the long term.
Backend software development tends to be less tolerant of mistakes.
Data analysis is arguably deserving of the highest quality code, because it can fail silently. Illustration: At the last company I worked at, I was the maintainer of the company’s email system, including marketing emails. I once found a subtle bug in code written by one of the company’s data scientists that caused it to double or triple count purchases generated as a result of marketing emails. Sometimes you can write test cases for data analysis code, but it was very difficult in the environment we were using, and it can be hard to capture all of the corner cases. However, software developers are perennially overconfident in their ability to produce bug-free code (the more experienced you get as a backend developer, the more paranoid you become), and I wouldn’t be surprised if the industry as a whole has yet to catch on to this problem of analytics code silently giving bad numbers. (Also, much data analysis programming is surprisingly unsophisticated mathematically… it’s mostly sums and averages.)
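To make the “fails silently” point concrete, here is a toy sketch of that class of bug (the data and the attribution rule are made up for illustration, not the actual code in question):

```python
# Toy illustration, made-up data: attributing purchases to marketing emails by
# matching on user id silently over-counts when a user received several emails.
emails = [                      # one row per email send
    {"user": "alice", "campaign": "spring_sale"},
    {"user": "alice", "campaign": "weekly_digest"},
    {"user": "bob",   "campaign": "spring_sale"},
]
purchases = [                   # one row per purchase
    {"user": "alice", "amount": 40.0},
    {"user": "bob",   "amount": 25.0},
]

# Buggy attribution: every (email, purchase) pair with a matching user counts the
# purchase, so alice's single $40 purchase is counted once per email she received.
attributed = sum(p["amount"] for e in emails for p in purchases if e["user"] == p["user"])
print(attributed)        # 105.0, but the true total is only 65.0

# One fix: attribute each purchase at most once, no matter how many emails matched.
attributed_once = sum(p["amount"] for p in purchases
                      if any(e["user"] == p["user"] for e in emails))
print(attributed_once)   # 65.0
```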
I’m highly intrigued by Steve Yegge’s liberal/conservative dichotomy of software development cultures. Under this paradigm, liberal cultures (more common at smaller/startup companies or those working on less critical applications) are more tolerant of errors, while conservative cultures are less tolerant.
If anyone is interested, I recently scribbled down some tips on writing reliable software for a friend working in finance who is learning to code to automate aspects of his job. (Code review is something I didn’t mention that works well if you’re working in a group.)
By the way, does anyone have any idea why, as Jonah says, there is so little metacognition among mathematicians? I found the same thing to a surprising degree in software development (by thinking about my thought process while coding, I was frequently able to generate lines of inquiry I hadn’t read or heard about from anyone).
I think there is quite a bit of metacognition, especially in the context of teaching mathematics to others. One issue is that there is some evidence that successful mathematicians are quite heterogeneous in how they think (the classic analyst vs. algebraist dichotomy).
Possibly because metacognition isn’t much like mathematics, so there’s no reason to expect mathematicians to be especially interested in it.
This isn’t a complete answer, but I think a significant part of it is that metacognition is not incentivized professionally for researchers who are not at the top of their fields. Few pure mathematicians are capable of doing original work that requires a lot of metacognition. Still, there are questions about why mathematicians aren’t metacognitive anyway (professional incentives are not the only thing that drives people’s behavior). I don’t have a great answer to this, but my subsequent articles will shed some light on the situation.
Because people have a straightforwardly magical/mystical conception of how Doing Math works, much as they do with other forms of cognition.
By liberal cultures being more tolerant of errors, do you mean that they design with error robustness in mind, or they don’t assign as much blame for them?
Second one.
Being good at math requires an intuition for mathematical computation. Most of this occurs subconsciously, where there is no limit of the kind conscious processing (as measured by IQ) runs into. In general, IQ is more suited to forming relationships in small systems, but the pieces from which those relationships are drawn are rooted in what is empirical. IQ correlates less with true math ability than developed intuition and pure experience do.
I also have a high variance in my intellectual abilities. I got a perfect score on the math section of the GRE, but received a C+ in my high school geometry class despite putting a massive amount of effort into it. A big challenge I have repeatedly faced is convincing people that my inability to accomplish certain things isn’t due to laziness.
I’ve had the exact same problem, and it’s made life difficult for me in many ways. One point that people miss is that if your abilities are very uneven, your comparative advantage on certain tasks can be so much lower than your comparative advantage on other tasks that it’s actually rational not to do certain things that are “required.” In college I took a music theory course that turned out to be centered around memorizing the different key signatures and the scales and chords in each of them. It makes a lot more sense for someone who can do these things easily to put effort in the class than it does for somebody who finds it a serious struggle while being very good at other things.
For me, conscientiousness is the main problem. The high school I attend places a very high emphasis on doing homework when calculating grades, and the grading system my calculus teacher has in place goes something like this:
He assigns us nightly homework.
The next day, he checks how much of the homework you completed. Just that. No weight is placed on whether or not you’ve actually done the problems correctly: someone who completed half the problems and got all of them right would receive 5⁄10, whereas someone who completed all of them but got them all wrong would receive 10⁄10.
This means that I have more or less taken to filling in random answers on the homework problems whenever I don’t feel like doing excess work, which for me (someone who has ADHD and as such is great at procrastination) is pretty much every night. I get ~100% on the tests we take, but the percent grade that homework provides is far greater than that provided by the tests, so what ends up happening is I get a B or so in the class whereas someone who doesn’t really understand the material manages to coast through by simply completing the work and gets an A in the end.
The problem is, I feel, that the way the system is set up now doesn’t really incentivize a true understanding of the material. Rather, it encourages rote memorization and task completion, which is a great way to stifle creativity, which I am given to understand is rather important later on in mathematics. (When I am feeling particularly uncharitable, I am prone to characterizing it as “drone behavior”.) Unfortunately, this means that someone who doesn’t think that way and doesn’t want to think that way is punished by the grading system, and—what’s worse—the teachers think that I’m just being lazy when really many of my classmates are (in my opinion) being a lot more lazy by not actually bothering to learn the material.
If I understand correctly, there are various forms of dyscalculia which can selectively impact performance in arithmetic or geometry, and dyscalculia in general occurs over a wide range of IQ.
This suggests that skills in specific math fields may be related to specific neural circuits in the brain.
How would one go about getting this type of thing measured for oneself?
One can try to use online IQ tests, though they haven’t been normed rigorously. The GIQ Test is perhaps the best I’ve found.
You can also use SAT scores as a proxy to IQ scores to some extent (as I indicate in the article).
Finally, on the point of working memory in particular, you can test your digit span and compare it with a table of percentiles.
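If it helps, here is a rough sketch of a forward digit-span self-test you could run in a terminal (it is not a normed instrument, so treat any comparison against published percentile tables loosely):

```python
import random
import time

# Rough forward digit-span self-test: sequences get one digit longer after each
# correct recall, and the test stops at the first mistake.
span = 3
while True:
    digits = [str(random.randint(0, 9)) for _ in range(span)]
    print("Memorize:", " ".join(digits))
    time.sleep(max(1.0, 0.8 * span))     # rough presentation time, ~0.8s per digit
    print("\n" * 40)                     # crude screen clear
    answer = input("Type the digits back (no spaces): ").strip()
    if answer == "".join(digits):
        span += 1                        # correct: try a longer sequence
    else:
        print("Your forward digit span is roughly", span - 1)
        break
```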
International audience: I don’t know what the average math SAT score is, so I can’t judge how impressive 630 is.
Thanks, I added relevant information within the article. This was just an oversight – I wasn’t making a conscious choice not to cater to an international audience.
Good. I expected that you would appreciate ways like that to improve your article.
I didn’t want to imply that it was a conscious decision. Stuffing a lot of meaning into few words is hard ;)
Out of 800. About 80-85ish percentile?
That still doesn’t tell me what the average person gets.
How many characters is the google complexity of your question (defined as “lowest length of query, in characters, that gives you an answer on the first page”)? An upper bound is 8 characters.
I can Google the answer, but I don’t think I’m the only person who doesn’t know what the average happens to be. I think it’s worthwhile for an article like this to provide a reference point for such a score.
Reading at (http://www.collegeboard.com/prod_downloads/highered/ra/sat/SATPercentileRanks.pdf) that Jonah’s score of 720 puts him in the 96th percentile of the overall population makes a sentence like
more interesting. There’s no good reason not to provide that information in the article.
I agree with your larger point.
The fact that any dimension reduction strategy from [super large and complicated space that is your brain] to [positive integer within a range] is badly calibrated should be news to precisely no one.
I don’t think I agree with “badly calibrated” there. It’s not as though everyone saw g coming, or that it faces no opposition; it’s surprising and interesting how much of the variance in human intelligence can be captured by a single dimension. Is it 100%? Of course not, and I say that because we calculate g loadings and correlations and have a good idea of just how much information g does or does not give us.
Did you take the SAT subject test in math? How did you do?
800/800/790/780 on math IIC, physics, chemistry and writing (I do much better on tests of subject matter knowledge).
It’s worth noting that the SAT2 (subject tests) are much more rarely taken; while nearly all students who anticipate tertiary schooling in the US take the SAT, only a relative handful take the SAT2 (or did when I was looking at it). My 740 in math (SAT1) was at a substantially higher percentile than my 790 on the SAT2 math subject test.
I also thought that the College Board’s claim that the SAT 1 is not an IQ test was really odd. The test is (or was, in 2004/2005) full of the following categories of problem:
1) Things a reasonably competent high school math student could solve if they took the time, but the answers were widely spaced (say, by order of magnitude) and you could figure out the only possible one near-instantly if you knew to filter for it instead of for the exact solution.
2) Problems like the example given above, where no education past second grade (generously) is needed to actually solve the problem; it’s just a matter of how quickly you can figure out what it’s asking and how best to determine that.
3) Problems that could be solved, relatively easily but slowly, the “long” way, or that could be solved quickly if you knew the trick.
4) Problems that look complicated, but where getting the answer only requires solving a much simpler subset of the full problem.
None of the four actually tests your knowledge of math as a subject, really. #3 is the only type where you might reasonably expect to have learned the optimal technique from a teacher, unless your teacher was specifically preparing you for the SAT. A common trend across all of them is that time is of the essence; while some people might genuinely be unable to solve a few of the problems, most would just take too long if done the simplest or most straightforward way, and you’d run out of time. The trick was to figure out how to solve each problem quickly, whether because the problem fit a pattern with a quickly-solvable solution, or because the solution required novel thought rather than merely plugging values into a formula, or because the test basically gives you the answer if you know how to spot it.
The subject tests (I did math, physics, and writing) were much more like a classroom test, where actual knowledge of the subject was being tested. You still had to be quick, but the questions were much less likely to be “you can solve this in 10 seconds or 3 minutes, depending on what you do” and more likely to be “you hopefully know the formula for this, so solve it as fast as you can!”
As other people have said, I look forward to reading the rest of this series. I’m surprised this isn’t a promoted post...
Can you say what you got on the critical reading part? A lot of people consider that part a better indicator of general intelligence than the math one, since it has a higher ceiling and is harder to improve on. Maybe you missed a vocab one or two, but were you good at the passage-based ones? Like really good, rarely missing one? People who made those sections look easy always stood out to me as really smart, like they could be good at whatever they applied themselves to.
I have no insight to offer here, but I would just like to say: a very, very interesting post.
I had no idea this was the case. Again, I’m not in the Grothendieck category either, but I am very uneven in my abilities too. I thought I was an exception in that regard.
People generally aren’t!
Thanks!
Yvain / Scott Alexander is another example. See section 2 of his post The Parable Of Talents. I agree with most of what he says and find his post quite insightful. But I think that his assessment of his own mathematical ability is probably wrong, even though his struggling to get a C- in calculus probably reflects some sort of innate difference between him and his classmates. In fact, observing this was one of the proximate causes of my writing on the nature of mathematical ability.
Thanks for writing this post, and specifically for trying to change Scott’s mind. Scott’s complaints about his math abilities often go like this:
“Man, I wish I wasn’t so terrible at math. Now if you will excuse me, I am going to tear the statistical methodology in this paper to pieces.”
Put me in as yet another “clearly not in the genius category” person in a somewhat mathy area awaiting the rest of this series. I think a lot about what “mathematical sophistication” is, so I am curious what your conclusions are.
I think mathematical sophistication gets you a lot of what is called “rationality skills” here for free, basically.
Scott’s technique for shredding papers’ conclusions seems to me to consist mostly of finding alternative stories that account for the data and that the authors have overlooked or downplayed. That’s not really a math thing, and it plays right to his strengths.
Causal stories in particular.
I actually disagree that having a good intuitive grasp of “stories” of this type is not a math thing, or a part of the descriptive statistics magisterium (unless you think graphical models are descriptive statistics). “Oh but maybe there is confounder X” quickly becomes a maze of twisty passages where it is easy to get lost.
“Math things” is thinking carefully.
I think equating lots of derivation mistakes or whatever with poor math ability is: (a) toxic and (b) wrong. I think the innate ability/genius model of successful mathematicians is (a) toxic and (b) wrong. I further think that a better model for a successful mathematician is someone who is past a certain innate ability threshold who has the drive to keep going and the morale to not give up. To reiterate, I believe for most folks who post here the dominating term is drive and morale, not ability (of course drive and morale are also partly hereditary).
I have the sort of math skills that Scott claims to lack. I lack his skill at writing, and I stand in awe (and envy) at how far Scott’s variety of intelligence takes him down the path of rationality. I currently believe that the sort of reasoning he does (which does require careful thinking) does not cluster with mathy things in intelligence-space.
Look at his latest post: “hey wait a second, there is bias by censoring!” The “hard/conceptual part” is structuring the problem in the right way to notice something is wrong, the “bookkeeping” part is e.g. Kaplan-Meier / censoring-adjustment-via-truncation.
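For what it’s worth, here’s a bare-bones sketch of the kind of bookkeeping I mean: a from-scratch product-limit (Kaplan–Meier) estimate on made-up data, not anything from Scott’s actual post:

```python
def kaplan_meier(durations, observed):
    """Product-limit (Kaplan-Meier) survival estimate for right-censored data.

    durations: follow-up time for each subject
    observed:  True if the event occurred, False if the subject was censored
    """
    event_times = sorted({t for t, obs in zip(durations, observed) if obs})
    survival, curve = 1.0, []
    for t in event_times:
        at_risk = sum(d >= t for d in durations)                  # still under observation at t
        events = sum(d == t and o for d, o in zip(durations, observed))
        survival *= 1 - events / at_risk                          # product-limit step
        curve.append((t, survival))
    return curve

# Made-up follow-up times; False marks subjects lost to follow-up (censored).
print(kaplan_meier([2, 3, 3, 5, 8, 8], [True, True, False, True, False, True]))
```

The conceptual step is noticing that censoring biases a naive average; the code above is just the bookkeeping that corrects for it.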
I don’t disagree with this. A lot of the kind of math Scott lacks is just rather complicated bookkeeping.
(Apropos of nothing, the word “bookkeeping” has the unusual property of containing three consecutive pairs of doubled letters: oo, kk, ee.)
Obviously, everyone will do best if they work as hard as they can, and innate talent models can prevent people from working harder. But innate talent models are really useful at avoiding wasted effort and frustration, and so I think it’s easier to make the case for the toxicity and wrongness of the inverse of the models you’re describing.
In particular, I think Scott’s easy response is to say “what happens when you put drive and morale into the ability bucket along with calculation ability?” (You bring up that possibility a bit with the hereditary section.) Given that Scott doesn’t seem to have the drive or the morale, then you either need to claim to him that he’s misdiagnosed his drive to do mathematics, which seems toxic and wrong,* or to say “yep, not a mathematician.”
Also, I do work both as a research mathematician and as an industrial mathematician, and while you might be right about research mathematics, I think that equating a lot of derivation mistakes with poor industrial math ability is healthy and right. I don’t want to hand off an analysis problem to someone who makes derivation mistakes, because the results can’t be trusted, and I wouldn’t want my accountant to be bad at calculation, and so on.
*Scott’s made the comparison to sexual orientation before, and I think it fits; just like a gay guy would get fed up with people saying “you just haven’t met the right girl yet!”, someone who does not have the drive or talent for mathematics will get fed up with people saying “you just haven’t met the right theorem yet!”
I think this is how this conversation is playing out:
“I think there are wizards and muggles (from birth). Wizards can do magic because they are born with magic-sensitivity, muggles cannot because they are not born magic-sensitive.”
“I think there is some amount of magic sensitivity you need, but a lot of it is just general psych factors which are partly hereditary but which you can partly control. In particular, miscasting spells sometimes isn’t evidence you are a bad wizard. Wizards can be good at lots of other types of magic stuff.”
“Ok maybe for certain types of wizards that’s true, but you know, in industrial settings your wand-work needs to be bulletproof. Also, you don’t want to give muggles false hope lest they waste a lot of time at Hogwarts.”
It seems to me you just like the muggle/wizard worldview, and I don’t. I view the “raising the sanity waterline” project as related to the “raising average math proficiency” project. I think on net, people should be less scared of math, not more, and shouldn’t be afraid to dip their toes in this or that even if they are not going to do cutting edge work, necessarily. Math is infinite and eventually everyone gets confused and lost, even the best among us.
The big difference between the wizard/muggle model and the drive/morale model is that with the latter you can decide your tradeoffs regarding efforts in a magic-wardly direction (magic is pretty useful!). You might not want to spend 10 years in a wizarding school, but you may take the time to familiarize yourself with classes of spells, be able to read magical literature of some types, etc. With the former you just write yourself off as a muggle.
I feel like “just like” is about half right; I think it’s much more plausible than alternatives, and I get that you don’t think it’s much more plausible than alternatives. But I also think that whether or not we should promote a view publicly depends a lot on the impacts of that view. In particular:
I think the main empirical difference between the two models is the increase in ability we expect to see from a marginal increase in expended effort. (I’ll call this ‘elasticity.’) The wizard/muggle model thinks that elasticity varies heavily from person to person, and that some people have very high elasticities and other people have very low elasticities. The drive/morale model thinks that people have comparable elasticities, and the input is the main thing that matters.
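Just to restate that definition in symbols (nothing deeper is intended):

```latex
\text{elasticity} \;\approx\; \frac{\Delta\,\text{ability}}{\Delta\,\text{effort}}
```

The wizard/muggle model says this ratio varies enormously from person to person; the drive/morale model says it is roughly comparable across people and that effort is the main input.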
Let’s move from Scott to an example I feel a bit more strongly about: I have artist friends who sometimes have difficulty adding single-digit numbers. I wish they were less scared of math, and I suspect a large part of their difficulty with arithmetic is trauma from subpar math education being inflicted on them before they were ready for it—but my goal is to get them up to the point where they are comfortable with the sorts of numbers they deal with in their professional work, which is basically just arithmetic. Maybe they could go further, with heroic effort, but that doesn’t seem worth trying.
I agree with Scott that you are making the exact wrong prediction about the impacts of the wizard/muggle model vs. the drive/morale model on motivation and self-worth (and then eventual ability). I see the wizard/muggle model as allowing radical self-acceptance: it is okay for my friends to have difficulty adding single-digit numbers because that’s where their competence level is, and it is okay for it to be a major project for them to erase that difficulty, because that’s where their elasticity is. And if they move at their pace with their needs taken care of, then they might be willing to dip their toes in more and more, and regain the sense of play that makes learning fun. How can I go to them with the same drive/morale model that they’ve been cudgeled with for decades and say “but if you just tried harder, you too could be where I was when I was five!”, when it is neither kind nor true nor conducive to play or seeking?
I agree that a set of folks just isn’t great at math. However, among the LW-posting crowd in particular (the set I am concerned with), while there are indeed some folks who aren’t great at math, I suspect more often than not there are morale (partly due to shitty math edu) and akrasia (partly due to untrained human’s issues maintaining a sustained line of effort in any direction) issues instead.
I think there are a lot of toxic default views about how mathematical activity is done floating around. For example, even otherwise very smart folks feel frustrated if they cannot solve something in a day. But doing non-trivial math, in my experience, takes months, possibly years, of work. If getting a non-trivial result out takes that long, you need to be able to: (a) not write yourself off as dimwitted and slow, even in the face of there being people who might be able to deal with what you are working on faster, perhaps much faster, and (b) maintain the motivation to keep banging on the problem.
These are non-trivial “psych skills” that I think are needed for doing serious math for people who aren’t Terry Tao. Note: I am not implying that Terry Tao magically spits out results in a day, there is every indication that he works very very hard (it is just that he is so smart he moves up to the frontier of what is possible for him). I just think that the existence of people like Tao doesn’t imply other folks cannot make meaningful contributions, they just need to learn to work in an environment that has Taos of the world running around.
Leaving “being a mathematician” aside, I think if you really aren’t good at math, “rationality” probably isn’t for you, it all sits on math in the end. If you really cannot grasp certain basics, you cannot have more than a religious engagement with rationality. Note: I don’t mean you must be able to do novel work in mathematics, I mean you have serious issues thinking mathematically at all.
Imagine really trying to explain to your artist friends about correlation and causation, or about why Bayes theorem implies a positive test result for a nasty disease probably means a false positive if the disease is rare enough, or (God forbid..) the Monty Hall problem, etc.
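For the disease example, the arithmetic looks like this (the numbers are made up but typical of how the example is usually posed):

```latex
P(\text{disease} \mid +)
= \frac{P(+ \mid \text{disease})\,P(\text{disease})}
       {P(+ \mid \text{disease})\,P(\text{disease}) + P(+ \mid \text{healthy})\,P(\text{healthy})}
= \frac{0.99 \times 0.001}{0.99 \times 0.001 + 0.05 \times 0.999}
\approx 0.019
```

So even with a 99%-sensitive test and only a 5% false-positive rate, a positive result for a 1-in-1000 condition is still roughly 98% likely to be a false positive.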
More of this sort of engagement (engagement on the margin for folks who might be capable of a bit more math thinking but for akrasia/morale issues) is what I feel the goal of mathematician outreach should be.
I actually think we don’t disagree on anything important (e.g. goals, or easy to verify statements), just on hard empirical matters. Which is a good place for a disagreement to be!
I agree with most of your diagnosis in this section—but I still disagree with your prescription. When I was working through my graduate electrodynamics course that used Jackson, I found that, empirically, it took about 10 hours for me to complete a set of homework problems, and each day I could do about 2 hours of work before I ran out of motivation/energy (which would only be replaced by sleep and time). Basically every other class I’ve taken I could solve the homework problems in one session (i.e. I don’t recall systematically running out of energy midway). So I said “alright, if I want to get this weekly homework done on time, I need to work on it for five days and only take two off,” and that’s what I did.
(Incidentally, this is how I know I’m not as clever at solving physics problems as Stephen Hawking—as I recall it, his classmates told the story of how they were sitting down in the common room to work Jackson problems for the first time, and not being able to do them after a few hours of focused effort was a rude shock, and then Hawking came down the stairs from his room. They asked how he was doing on them, and he said “well, I got the first six, but the seventh one is giving me a bit of trouble.”)
It seems to me that viewing my ability to do Jackson problems as a measurable quantity that I can discover and use in planning is helpful. It also seems that trying to nudge the numbers involved (could I solve all of them in only eight hours? How about working 2.5 hours a day rather than just 2?) is way less useful than arranging other things around those numbers (my old homework schedule won’t work; let’s design a new one that will, and if I can’t find a homework schedule that works, maybe I shouldn’t expect to pass electrodynamics).
It also seems to me that being able to measure my ability and put it as a number on the real line (or vector in R^n) helps break out of discrete categories—rather than just “good at physics” and “bad at physics,” I can observe that I’m about a 10 on the Jackson scale, and a 10 is way more useful and precise than “good” or “bad,” especially if I’m prone to falling into the trap of using myself as the boundary between “good” and “bad.” Among other things, I can schedule my week around a 10; I can’t schedule my week around a “not as good as Stephen Hawking.”
So, some of this is doable with other modules in the brain; the Wason Selection Task is the famous example. A lot of the instrumental rationality things, like WOOP or self-reflection about emotions, seem possible to teach and useful. Monty Hall seems really hard—not to get it right if they’ve got pencil and paper and know how to turn the crank, but to look at the problem and say “this looks easy but is in fact a hard problem! I should get out my pencil and paper.”
I’m not clear if we’re disagreeing on whether or not (1) the elasticity model is accurate, (2) it’s useful to believe it, or (3) it’s useful to promote it. (Obviously, those aren’t exclusive options.) I’m interpreting eli_sennesh as claiming that even if the elasticity model is true, believing in it is demoralizing and thus will reduce one’s elasticity, and I’m interpreting the first paragraph of this comment of yours in the same way, but I’m also interpreting your comment here as mostly agreeing with (1).
I think how math elasticity is distributed in the general population is an open empirical question. It could be most people will get poor returns on effort wrt math, or it could be we are very bad at teaching math, and making math non-scary to attempt.
Regardless of what the distribution of elasticity is, if you are interested in rationality, you need to be able to push yourself a bit on certain math topics if you want real engagement. I don’t think there is a way around it. So, e.g., I disagree with Cyan above where he claims Scott is not being mathy when he engages with rationality. I think Scott absolutely is being mathematical; it just does not look like it because there are no outward signs of “wizarding stuff being done,” e.g. scary notation.
[ Random fantasy aside, if you read Scott Bakker, compare to how his Gnostic magic works, there is the audible part, the utteral, and the inaudible part, the inutteral. The inutteral is hard to explain, it is the correct habit of thought to make the magic work. Without the inutteral the spell always fails. My view of math is like this: thinking “in the right way” is the inutteral, the notation/formalization, etc. is the utteral. ]
I worry that all this talk about “top 200 mathematicians” is thinly disguised status talk.
For example, it was taken as given that technique is what’s important (this is common in pure math, and also in theoretical CS). But often conceptual insight is important. Or actually doing the technical proof using a mostly understood approach. There is a lot of heterogeneity in (a) how mathematicians think and (b) what various mathematicians are good at (see e.g. hedgehogs vs foxes). Thinking hard about what sort of contribution is really the most important just feels like status anxiety to me. Let a thousand flowers bloom, I say!
I agree that this is mostly status games, and “importance” is not a useful measure relative to, say, marginal value. (That is, I assume most people think of “importance” in terms of, say, “average value,” but the average value of a thing does not tell you whether your current level should increase, decrease, or stay the same.) Find the best niche for you in the market / ecosystem, and make profits/contributions, and only worry about the rest to the extent that it helps you find a better niche.
Re: “value” I am not sure how to think about mathematics in consequentialist terms (and I am not a huge fan of consequentialism in general). The worry is that we should all stop doing math and start working on online ads or something.
I agree with the niche comment as a practical hack given our inability to predict the future.
Actually, I think both personal capacity/elasticity and morale/drive are more like real numbers than like booleans, and that since they’re both factors in your actually doing/learning math, there seems to me (just based on observation) to be a very large range of values for the two variables where you can leverage one to make up for a lack of the other in order to get more math done.
I also think that the elasticity/ability variable does not dictate a hard limit on how much math you can learn, but instead on how quickly you learn it.
To pin myself down to concrete predictions, I don’t think, for instance, that most people are incapable of learning, say, beginning multivariable calculus (without any serious analysis, the engineers’ version). I do think that many people have so little innate ability that they cannot learn it quickly enough to pass a math, science, or engineering degree in four years. I just think they can probably learn the material if they repeat certain courses twice over and wind up taking seven years. What we normally consider being Bad At Math simply means being so slow to learn math that it would take decades to learn what more talented students can digest in mere years.
Neither innate ability nor drive, in my view, draws a hard “line in the sand” that dictates an impassable limit. They simply dictate where the “price curve” of pedagogical resource trade-offs will fall; we can always educate more people further if we have the resources to invest and can expect a positive return.
(To further pin myself to concrete experience, I have a dyslexic friend from undergrad who faced exactly this trouble with programming. Because the university and his family knew he was dyslexic, he was allowed to take five years to finish his undergraduate degree in Computer Science, and he used quite a lot of personal grit and drive to ensure he studied enough to pass in those five years. If I recall, he came out with just about a 3.10/4.00 GPA, or somewhere thereabouts—not excellent but respectable. Today he works as a software engineer for Cisco and earns a healthy salary, because the university and his parents decided the extra resources were worth allowing/investing to let him learn the fundamentals at the pace his studying efforts could carry him.)
I must protest: absolutely nothing happens. Real limitations on your drive and morale don’t feel from the inside like “running out of hit points”, they feel like akrasia. So as long as you can choose, from the inside, to keep going, you haven’t run out of drive yet.
I am reluctant to generalize to the internal experience of others, especially if the differences between my internal experience and someone else’s are the cause of those externally observable drive and morale differences.
I feel like this is contentless. Suppose you and I are observing a third person, and they do not keep going. Can either of us state whether or not they could choose to keep going from the inside?
Now, if you are struggling, and say to yourself “this sucks, but I will keep going,” and you do keep going for longer having said that to yourself, then great! As you point out, that’s part of your overall drive, and so is irrelevant once we step outside how you get the drive you have and are instead quantifying how much drive you have.
Unfortunately, the key word there is outside: reasoning about some other system than one’s self encounters no Loebian obstacles. Reasoning about one’s self usually tends to involve reasoning about one’s self-model, which is actually a necessarily less accurate description of your state of being than the raw data of how things feel-from-the-inside to Objectively Be.
When you are going “UUUUGGGGGH, THIS IS FUCKING IMPOSSIBLE GODDAMNIT!” you may have hit your limit for the time being. When you are frustrated and thinking, “I probably just have limited ability at maths”, that’s just anxiety.
I’m not sure I buy this, though. If you view the ability as learning ability, or what I call elasticity over here, then it seems like I can say “I find it more difficult than I expect to learn a foreign language; I’ll downgrade my expected elasticity for foreign languages” and that might switch the EV of spending more effort on learning foreign languages from positive to negative. If there are multiple things I could do, and which thing is wisest depends on the relative elasticities, then trying to estimate those elasticities seems useful and doable without hitting the hard limit.
“Thinking carefully” is necessary but not sufficient for “math things”.
I don’t know about that—there are opportunity costs. Let’s say you’re smart, and conscientious, and have good analytical skills, etc., but not particularly good at math. Yes, you can probably make a passable mathematician if you persevere and sink a lot of time and effort into learning math. But since math is not your strong point, you probably would have made a better X (social scientist, hedge fund manager, biologist, etc.) with a lot less effort and frustration. Thus going for math would be a losing move.
And, of course, this.
Anecdata: I got an A in Calculus 1, a C+ in Calculus 2, and an A- in Calculus 3. Of them all, Calculus 2 seemed to be the most focused on “memorize this bunch of unjustified heuristics”, and Calculus 3 was one of the first and only times I really experienced the Wonders of Math in an actual course.
Oh, and for further anecdata, without being able to convert to letter grades, I got a 75% in Statistics 1 and failed (post-grad level) Intro to Machine Learning last year due to taking the courses without the continuous probability-theory prereq, and then retook Machine Learning this year to get an 86%.
It seems to me that a lot of variation in math grades can be very easily explained by differences in previous preparation.
As someone whose day job largely consists of teaching Calculus 1, 2, and 3, I heartily agree with you about what they are like! If I could redesign the curriculum from scratch, Calculus 3 would definitely come before Calculus 2 (for the most part), and far fewer people would be required to ever take Calculus 2 at all.
ETA: I’m talking about the curriculum in most colleges in the U.S., so I hope that you are too; other countries’ curricula can vary a lot.
Calc 3 for me was Multivariate Calculus.
Actually, yeah, requiring more people to take Multivariate Calculus and fewer people to take Assorted Sequence/Series and Integration Heuristics sounds like a fine idea.
Yep, sounds like we’re talking about the same curriculum.
What’s your AoPS username? :P
Actually, you know what? I just thought of a major flaw in the way mathematics is taught.
Math is the only field in which the evidence for the truth of the statement is deliberately withheld from the learner. In no empirical science would we ever say anything like, “Experiments to confirm Newton’s Three Laws of Motion are left as an exercise to the reader.” We hold lab sessions to guide students through exactly those experiments—the experimental sciences run on a “show me!” basis.
Whereas in mathematics, we often write in pedagogical textbooks (rather than hobbyist puzzle-books) that the proof of an important theorem is “left as an exercise to the reader”, or we write proofs simply by giving an equation, then stating “It clearly follows that...” followed by another equation.
This conveys a pair of tragic, hurtful misperceptions to math students: “It is your job to rationalize the statements taught to you as true” and “Everything would be trivially apparent if you were only intelligent enough”.
There are as many opinions on issues with math education as there are mathematicians (I think the only consensus is We Are Doing It Wrong).
My view is math education needs to spend a long time at the start (e.g. before calculus, maybe even before trig) talking about what a proof is, and teaching students how to prove things. That is “here is a simple statement and a sequence of steps that form a proof. Now try to prove a similar statement.”
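For instance (a standard toy example, not taken from any particular curriculum):

```latex
\textbf{Claim.} If $m$ and $n$ are even integers, then $m + n$ is even.

\textbf{Proof.} By definition of even, $m = 2a$ and $n = 2b$ for some integers $a$ and $b$.
Then $m + n = 2a + 2b = 2(a + b)$, and $a + b$ is an integer,
so $m + n$ is twice an integer, i.e.\ even. $\blacksquare$
```

followed by “now prove that the sum of two odd integers is even.”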
A lot of math education is subject-oriented (e.g. here is an analysis class, here is an algebra class, a topology class, etc.) And very few math programs, at least last I looked, really offer students fresh out of calculus a primer on proving things. They just immediately throw them in the pool and expect them to start showing things about a particular subject.
If I were Math-Emperor of Earth, I would start math education by first administering a test to figure out if a person is more visual or more algebraic, and in the former case start them on learning-what-a-proof-is via geometry (a modern Euclid take, basically), and in the latter case start them on learning-what-a-proof-is via a suitably algebraic subject, maybe even abstract algebra directly, or maybe some subject with an algebraic flavor but not too abstract (physics? number theory? I am not sure.)
Old school Russian math education used to start on geometry proofs very early (6th grade I think). Now I think they are copying Western education more, more’s the pity.
Will your subsequent posts address possible actions that can be taken to identify and repair deficiencies?
Like, say you suspect yourself to have low short-term memory span given your IQ and accomplishments. Are there ways to determine whether this self-diagnosis is true and then specifically train the weakness, as an adult?
(The short-term memory question is just an example. Maybe short-term memory can’t be trained, but surely some things can be.)
When Grothendieck was learning math, he was playing Dark Souls.
I find it interesting that no one has yet mentioned Grothendieck’s rather eccentric later behavior.
Do you feel it is relevant to the article’s thesis?
Can math be useful for nature and engineering? Can you add apples and oranges? Can you add two apples? Can you describe any property of an apple using math?
Since you are a Ph.D in math, it will be good to know your views on the above questions.
‘Useful for nature’? What is that?
(Also, I am not a Ph.D. in math, but I tell you this: you can add apples and oranges, if you are interested in the set containing both. Like: how much ascorbic acid would I get from eating 2 such-and-such apples and 3 such-and-such oranges?)