We tend to forget complicated things
One consistent pattern I’ve noticed in studying math is that, if some material feels very difficult, then I might remember it in an upcoming exam, but I will almost certainly have forgotten most of it one year later. The success story behind permanent knowledge gain is almost always “this was hard once but now it’s easy, so obviously I didn’t forget it” and almost never “I successfully memorized a lot of complicated-feeling things.”
I think this also applies outside of mathematics. If it’s roughly correct, then the most obvious consequence is to adapt your behavior when you’re learning something. Provided that your goal is to improve your understanding permanently by understanding the material conceptually (which, of course, may not be the case), either study until it gets easy, or decide it’s not worth your time at all, but don’t stop when you’ve just barely understood it.
I’ve violated this rule many times, and I think it has resulted in some pretty inefficient use of time.
I think you are describing overlearning and chunking (once concepts become chunked they “feel easy”, and one reliable way to chunk ideas is to overlearn them).
Some related links:
A comment by Qiaochu Yuan
Bryan Caplan’s “Libertarianism as Moral Overlearning”
Augmenting Long-term Memory (search “chunk”)
This sounds like: “Learn until you actually get it (or don’t learn at all).”
Because if it feels difficult, it’s likely there is something missing. Either some connections between different parts (so instead of a connected network, it feels like a list of isolated facts), or you memorized some rules but you don’t actually know why it works that way (you wouldn’t be able to re-derive those rules). Or perhaps you actually get it, but you didn’t have enough practice; which is also bad, because repetition is good for memory.
I agree that this is true, but I’d like to amend “Learn until you actually get it (or don’t learn at all)” to “Decide whether your goal is to completely understand the material or to use it instrumentally to achieve something else. Then choose to learn it completely or actively optimize for understanding it instrumentally.”
In the context of mathematics, the line between “deep concept I’m missing to understand this” and “one-off clever but counterintuitive step that turns out to work, possibly for a deep reason outside the field I’m working in” is pretty blurry. As a result, when you learn mathematics, you tend to have a set of knowledge composed of Deep Things You Truly Understand and a set of memorized things composed of One-Off Clever Hacks That Don’t Give Me Much New Insight Into The Things I’ve Learned. I’ve found that I do best when, in the process of learning the material, I try to systematically categorize the information I learn into those two categories.
An Example:
I know a lot of things (especially things related to my applied mathematics major) that I’m extremely confident I don’t see as isolated facts (i.e., the idea of orthogonal functions, the idea of differential equations, the idea of sines and cosines, the idea of Fourier series) and that I’m extremely confident make up the set of things I need to know to understand complicated facts (i.e., solving partial differential equations with Fourier expansions). I’m even relatively confident that I could eventually re-derive how to solve partial differential equations with Fourier expansions in certain cases.
However, the number of intermediate steps I would need to solve these partial differential equations is still pretty large (i.e., identifying the specific boundary conditions, deciding which expansions to try with which boundary conditions, figuring out which simplifications help things cancel out, etc.), and the way I would use my knowledge to carry out these intermediate steps is still pretty challenging and non-trivial. Re-deriving them is probably doable, but it’s not something I want to be bothered with. Learning through repetition would do the trick too, but if I abandoned the repetition for a year, I’d lose the benefit.
Turns out the best solution I’ve found is seeing a math problem, thinking “hm this reminds me vaguely of something I did two years ago”, and then searching Paul’s Online Math Notes to see if they solved a problem similar enough that I can steal from them.
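As a sketch of the kind of intermediate steps being described, here is the standard textbook case of the heat equation on an interval with homogeneous Dirichlet boundary conditions (an illustrative example, not necessarily the specific problems the comment has in mind):

```latex
% Heat equation on [0, L]:
%   u_t = k u_{xx},   u(0,t) = u(L,t) = 0,   u(x,0) = f(x)
%
% Separation of variables u = X(x)T(t) plus the boundary conditions
% selects the eigenfunctions sin(n pi x / L), so the solution is a
% Fourier sine series with exponentially decaying coefficients:
\[
u(x,t) = \sum_{n=1}^{\infty} b_n \,
         e^{-k \left(\frac{n\pi}{L}\right)^{2} t}
         \sin\frac{n\pi x}{L},
\qquad
b_n = \frac{2}{L} \int_0^L f(x) \sin\frac{n\pi x}{L}\, dx.
\]
```

Each step the comment lists shows up here: the boundary conditions pick which expansion to try (sines rather than cosines), and the orthogonality of the eigenfunctions is the simplification that makes the coefficients \(b_n\) cancel out cleanly.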
Yes, great correction. I’ve modified the post to state that it only applies for Deep Things You Truly Want To Understand.
I’m often confronted with the difference you’re describing but haven’t ever articulated it as you just have.
Some of the benefit to me is “knowing how” rather than “knowing what”. Maybe the hardest thing I did more than 10 years ago that’s unrelated to what I do now was write a MIDI encoder/decoder. I couldn’t write one right now—in fact, I’ve completely forgotten all the important details of the MIDI specifications. But I could write one in a couple days, way easier than the first time, because I know more or less what I did the first time, and I tautologically got lots of practice in the sub-skills that were used the most, even if I don’t remember the details.
So it really does depend on what I want to use the knowledge for.
Just wanted to say I appreciate the brevity. :) I’m guilty of overly long LessWrong posts.
I agree in general, but I think one can also retain a general sense of what a particular topic is about, so that if you come across a situation that calls for that topic, you’ll know to read up on it and remind yourself.
Yeah, your goal is not always to deeply understand the material you’re looking at, and the post only applies when it is.
Also consider that people are different: some people might have an easy time learning and reliably remembering lots of (unconnected) facts, while other people have much bigger problems with that but will have an easy time learning highly connected concepts.
I’ve taken the principle of never learning anything (or claiming to know it) unless I can deeply understand it to a very extreme place.
By “deeply understand” I mean: be able to apply it to a whole other set of inputs/problems than those you saw it applied to. Or, if that is not doable, the more abstract version: “be able to explain it in your own words in such a way that it could reach vastly different people than those who would have understood it from the original author’s words” (which, to some extent, is based on Feynman’s idea).
By “never learning” I mean not starting to learn something unless I plan to reach this point and see it as feasible to reach.
Which essentially removes the “complicated” problem, since with this method nothing stays complicated once you learn it, almost by definition. (Nothing stays complicated, but most things still stay complex: by “complicated” I understand “hard to navigate or use”, by “complex” I understand “requiring a very large map or frequent checking of the manual”.)
I think it helps create some very efficient primitives in one’s brain, at least for certain things, but I’m not sure it’s an approach I’d recommend; it puts you at odds with many things. I think rote memorization has its place, and in a way it’s good, or at least necessary, to “learn” some things that way.
It is commonly claimed that if you make it to “level N” in your mathematics education, you will only remember up to level N − 1 or N − 2 long-term. Obviously there is no canonical way to split knowledge into levels. But one could imagine a chain like:
1) Algebra
2) Calculus (think AP calc in the USA)
3) Linear Algebra and Multi-variable Calculus
4) Basic ‘Analysis’ (roughly proofs of things in Calculus)
5) Measure Theory
6) Advanced Analysis topic X (e.g. Evans’s Partial Differential Equations)
This theory roughly fits my experience.
In my experience learning something complicated isn’t hard and I don’t forget it easily if I am interested in the topic and have a use for it. I venture to guess most students don’t particularly enjoy or have a use for all the complicated mathematics they learn. Thus, they forget it quickly because it never stimulated the mind.
Interest allows us to commit more to memory, and having a use helps us understand the topic as a tool that can be built upon and potentially used in other areas.
If learning complicated things isn’t hard, then what’s the bottleneck on learning a new field?
I should have articulated my claim better: complicated doesn’t imply difficult. Sometimes there is a unique challenge to a concept, even if you have all the right tools. To answer your question: time, interest, dedication, and learning material.
A long (long, long) time ago a friend of mine said: Being smart is not about how much information you know but knowing where to get the information you need.
I tend to agree with that general statement.
I also agree that we will remember the things we actually understand much better than those cases where we just memorized a set of rules or other “facts” that have little meaning to us—except when they become those meaningless things we have to use regularly ;-)
I think this has some important implications for how we think about personal optimization and efficiency in knowledge and information management: what we “know” (stored in our heads) versus what we have ready access to and can retrieve without much search effort from “off-line” memory (books, notes, computers, or more general things like operational procedures...).
I do understand this changes the focus of the OP, and I am not rejecting that view—we do remember the things that we really understand, and those things just seem “easy” and tend to “just make sense” without the need to (consciously) re-derive the rule(s).
But I do wonder if it is really inefficient to study something only until you have a beginning understanding, even when you know you have no interest in or need for fully understanding it, as long as you learned enough to know where it applies and created a good “index” for quickly locating that information should you actually need to use that “knowledge” in the future.