The maths sub-Reddit had a post on maths flash cards which prompted me to write a long comment worrying about the relationship between memorisation and understanding.
One thing I’ve thought would be good to have is a program that takes math formulas and damages them, to produce plausible, similar-looking formulas but with terms missing or altered. This would be used to make a set of flash cards where you have to distinguish between real and damaged formulas.
I have written a program (in the form of a web page) which does a specialized form of this. It has a set of generators of formulas and damaged formulas, and presents you with a list containing several formulas of the same type (e.g. ∫ 2x dx = x^2 + C) but with one damaged (e.g. ∫ 2x dx = 2x^2 + C).
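For concreteness, here is a minimal sketch of what one generator/damager pair of this kind could look like. This is not the published source; the names (correctIntegral, damagedIntegral, makeProblemSet) and the particular corruptions are my own, and it only illustrates the idea of producing several correct formulas of one type plus exactly one corrupted variant.

```typescript
// Sketch only: correct indefinite integrals of n·x^(n−1), plus a "damaged"
// variant with the coefficient, exponent, or constant of integration corrupted.
interface Item {
  text: string;      // the formula as displayed
  damaged: boolean;  // whether this is the corrupted variant
}

function randomInt(lo: number, hi: number): number {
  return lo + Math.floor(Math.random() * (hi - lo + 1));
}

// Correct formula: ∫ n·x^(n−1) dx = x^n + C  (x^1 is left unsimplified for brevity)
function correctIntegral(n: number): string {
  return `∫ ${n}x^${n - 1} dx = x^${n} + C`;
}

// Damaged variant: keep the left-hand side, corrupt the right-hand side.
function damagedIntegral(n: number): string {
  const corruptions = [
    `∫ ${n}x^${n - 1} dx = ${n}x^${n} + C`,   // spurious coefficient
    `∫ ${n}x^${n - 1} dx = x^${n - 1} + C`,   // wrong exponent
    `∫ ${n}x^${n - 1} dx = x^${n}`,           // missing constant of integration
  ];
  return corruptions[randomInt(0, corruptions.length - 1)];
}

// Build a list of same-type formulas with exactly one damaged item.
function makeProblemSet(size: number): Item[] {
  const items: Item[] = [];
  const damagedIndex = randomInt(0, size - 1);
  for (let i = 0; i < size; i++) {
    const n = randomInt(2, 9);
    items.push(
      i === damagedIndex
        ? { text: damagedIntegral(n), damaged: true }
        : { text: correctIntegral(n), damaged: false }
    );
  }
  return items;
}
```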
You have to choose the incorrect formula (on the principle that finding errors in mostly-correct reasoning is more challenging and relevant than the usual multiple-choice approach of a bunch of often-obviously wrong items), and you are scored on the logarithm of the number of choices left when you pick the right item — choosing an undamaged item does not make you fail the problem set. This is not based on any particular model of learning; it’s just what I decided would minimize the annoyance/tedium of such a quiz for me.
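The scoring rule, as I read that description, could be summarised by something like the sketch below; the base-2 logarithm and the interpretation (a correct pick made while many candidates are still in play records more) are my assumptions, since the post says only "the logarithm of the number of choices left".

```typescript
// Sketch of the scoring rule as described above (base 2 is an assumption):
// when the damaged item is picked, the problem records the logarithm of the
// number of choices that were still in play at that moment.
function scoreForPick(choicesRemaining: number): number {
  return Math.log2(choicesRemaining);
}

// e.g. picking the damaged item with 8 candidates still in play records
// log2(8) = 3, while picking it after narrowing down to 2 records log2(2) = 1.
```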
It’s quite lacking in features and good architecture, as I’ve only worked on it for a couple of days, but I’ve just published the source code; you can also play it online in its current state.
There are no instructions and no options; all you can do is do your best to choose only incorrect formulas. It only generates integer factoring and differentiation-or-indefinite-integration problems.
I think that you shouldn’t keep the false formulas, so as not to accidentally learn them. In general, it sounds like you could hit on memetically strong corruptions which could contaminate your knowledge.
I’d prefer counter-examples over damaged formulas. For example, consider the theorem that a continuous function on the interval [a,b] is bounded. The damaged formula might read “If f is continuous on (a,b), then it is bounded.” There is some merit in spotting that (a,b) doesn’t include the end point a. There is greater merit in noticing that if we leave off the end point we have f(x) = 1/(x-a) as a continuous function that is unbounded.
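Stated a little more formally (these are standard facts from real analysis, in my own notation rather than quoted from the comment):

```latex
% Continuity on the open interval does not give boundedness:
% f(x) = 1/(x-a) is continuous on (a,b) but blows up at the left end.
\[
  f(x) = \frac{1}{x-a} \ \text{is continuous on } (a,b),
  \qquad \lim_{x \to a^{+}} f(x) = +\infty .
\]
% On the closed interval, the boundedness theorem applies:
\[
  f \ \text{continuous on } [a,b]
  \;\Longrightarrow\;
  \exists M > 0 \ \forall x \in [a,b] :\ |f(x)| \le M .
\]
```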
That leads on to the difference between what one might call syntactic memorisation and semantic memorisation. Confronted with the claim “If f is continuous on (a,b), then it is bounded,” one might know it is supposed to be [a,b] as a matter of rote memorisation, but such knowledge is only a stepping stone. One wants to press on to a deeper understanding, so that even if one forgets whether it is supposed to be (a,b) or [a,b], one can quickly reconstruct the memory by running over a counter-example for (a,b) and a proof for [a,b].
Those sorts of flash cards would be valuable, but I don’t know how to construct them in an automated way, whereas damaging formulas is comparatively easy.
Should there be a quiz context which is different from the memorization context?
I do this by hand for my programming flashcards. Take a correct example, make a couple broken versions of it, and review. Whenever I can’t distinguish between right and wrong, make a few more that hammer on the distinction. As long as there’s enough that I memorize the principle or syntactic rule or semantic concept, and not the specific examples themselves...
I really like that idea.
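As an illustration of that kind of hand-made card pair (a hypothetical TypeScript example of my own, not from the commenter’s deck), both cards below drill the same distinction: Array.prototype.map returns a new array rather than modifying the original.

```typescript
// Correct card: the return value of map is captured.
const xs = [1, 2, 3];
const doubled = xs.map((x) => x * 2);  // doubled is [2, 4, 6]; xs is unchanged

// Broken card: the return value is silently discarded, so nothing is doubled.
const ys = [1, 2, 3];
ys.map((y) => y * 2);                  // result thrown away; ys is still [1, 2, 3]
```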