Mathematical reasoning as such (and how exactly humans perform it) is extremely fascinating, as is the article.
I offer a tentative explanation of why people who are slow to pick up mathematics at first later go on to dominate it: the search algorithm (if you’ll tolerate a loose metaphor) their cognitive software is running is breadth-first. When they first begin to learn mathematics, their neurons are assaulted with a slew of possible interpretations, and assigning a clear semantics to the notation through that haze of conflicting ideas is difficult. In fact, it can be intellectually paralyzing. Repeatedly investigating faulty interpretations that stem from a slightly wrong semantics leaves you intellectually exhausted and seemingly no closer to a solution.
Mathematics may be easier on first introduction if you completely ignore the semantics of your notation and reason strictly within it. People who can do this appear to master the subject quickly, but what they’re really doing is rigid symbol-shifting rather than getting beneath the notation. Ask such a person to reason outside the notation and they’ll founder.
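To make the loose metaphor a little more concrete (a toy sketch only, with an invented “interpretation tree” and purely illustrative names, not a claim about actual cognition): breadth-first search keeps every candidate reading of a notation in play and expands them level by level, while depth-first search commits to one reading and follows it all the way down before ever looking at the alternatives.

    # Toy illustration of the metaphor only -- the tree of "interpretations" below
    # is invented for this sketch, not a model of how anyone actually learns.
    from collections import deque

    # Hypothetical tree: each reading of the notation "x^2" suggests further refinements.
    interpretations = {
        "x^2": ["x squared", "x superscript 2", "x XOR 2"],
        "x squared": ["repeated multiplication", "area of a square"],
        "x superscript 2": [],
        "x XOR 2": [],
        "repeated multiplication": [],
        "area of a square": [],
    }

    def breadth_first(root):
        """Keep every candidate reading alive and expand them level by level."""
        order, queue = [], deque([root])
        while queue:
            node = queue.popleft()
            order.append(node)
            queue.extend(interpretations.get(node, []))
        return order

    def depth_first(root):
        """Commit to the first reading and follow it to the bottom before backtracking."""
        order, stack = [], [root]
        while stack:
            node = stack.pop()
            order.append(node)
            # push children reversed so the first-listed child is explored first
            stack.extend(reversed(interpretations.get(node, [])))
        return order

    print(breadth_first("x^2"))  # surveys all three readings before refining any of them
    print(depth_first("x^2"))    # dives straight down one branch: looks fast, sees less

The breadth-first order surveys all three readings of “x^2” before refining any of them, which is thorough but slow to settle; the depth-first order reaches “repeated multiplication” immediately, which looks like rapid mastery but is just commitment to whichever branch came first.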
An attendant explanation is that slow-learning mathematicians synthesize their symbol-manipulation procedures from the ground up. Simply following instructions produces intellectual discomfort, so they have to understand small things completely before they can justify using them in more complex ways. They’re driven by their own instinctive desire for rigour to learn the hard (but ultimately more thorough and edifying) way.
I’m not a mathematician, but this was definitely what it felt like when I first attempted to learn computer programming, and what it felt like when I started taking mathematics seriously in school.
I just wanted to mention that I think these are great points. I hope to respond substantively later, though I don’t know when I’ll be able to.
This is a really good post, thank you.