(Take with a grain of salt, I’m far from IMO level and never seriously trained for math contests.)
After solving any given problem, reflect on general methods that would let you solve a bigger class of problems that includes the one you've cracked. For any miracle of intuitively seeing a solution method (or noticing some useful property), look for ways of more systematically inferring a workable method that don't rely on miracles. Don't consider a problem solved just because you solved it (i.e. used your intuition); you should also figure out how it could be solved (i.e. know in more detail how your intuition figured it out, or know a method other than the unknown one your intuition used).
I expect this can get one past some limitations of raw ability that wouldn’t otherwise be lifted using just problem-solving practice, but I’m not sure how far.
This is a reasonable portion of what I did for math olympiads; the other parts were doing lots and lots of problems and acquiring a solid technical background.
One thing I worked on in particular is formulating good solution strategies, where I could see the general steps of a solution (or the steps of a good approach) without having to actually fill in all the details of the approach; this involves having good heuristics for figuring out what is true/false and what can be proved without too much effort (and then deferring the actual proof until later when all the pieces have come into place).
FTR I was a USAMO winner (top 12) in high school but didn't make the IMO team. I did make the IOI team, though. I'm currently coaching at the US IOI training camp for the next week, so I don't have much spare time right now, but when I do I'll maybe share in a bit more detail the things I did (just a warning: this was before I started optimizing my behavior, so while it apparently worked, it was probably not an optimal trajectory).
(I expect top programming contests are harder/less feasible to train for at levels outside raw ability. It all happens too fast.)
A lot farther than most people realize. Few people actually try going meta, because social structures don’t encourage it. (At least not in a near sense.)
Not going meta to develop reliability in problem-solving cost me a lot of points. I just relied on magical intuition, which was good enough to solve some hard problems (to figure out the solution method, without knowing how it was being figured out), but not good enough to reliably solve those problems without errors.
As a result, when I was applying to college, I was afraid of the regular admission exams, which I couldn't reliably ace (because of technical errors I wouldn't notice, even though the solution methods were obvious). Instead I used the automatic perfect score given to winners of the Moscow math and physics olympiads, which required solving some hard problems but not solving all problems without errors. Which is a pretty stupid predicament. It just never occurred to me that producing perfect scores can be treated as an engineering problem, and I don't recall any high school teachers mentioning that (not even the smart college professors running cram-school sessions).
What do you mean by “going meta”?
I feel like the advice in your earlier comment is good for obtaining insight, but I can’t see how it would be useful on a test. I haven’t taken many tests where I have had enough time to solve each problem in several ways!
I’m eager to learn more if I haven’t understood correctly, though.
For harder tests, the benefit is in not ignoring low-hanging fruit, and in training yourself to look for any opportunity to improve reliability: performing cheap checks and selecting the more reliable of any alternative sub-steps. On the other hand, ordinary exams are often such that a well-prepared applicant can solve all problems in half the time or less, and then the failure mode is not using the remaining time to turn "probably about 90% of solutions are correct" into "95% chance the score is perfect".
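The arithmetic behind this is worth making explicit. As a rough sketch (assuming, purely for illustration, that problems are solved correctly independently with some per-problem reliability), the chance of a perfect score decays exponentially in the number of problems, which is why spending spare time on checks pays off disproportionately:

```python
def perfect_score_prob(p: float, n: int) -> float:
    """Chance of a perfect score if each of n problems is answered
    correctly with independent probability p (an idealized model)."""
    return p ** n

# Six problems at 90% per-problem reliability: only ~53% chance of a
# perfect score, despite "about 90% of solutions being correct".
print(perfect_score_prob(0.90, 6))  # ~0.53

# Using leftover exam time on cheap checks to push per-problem
# reliability to 99% lifts the perfect-score chance to ~94%.
print(perfect_score_prob(0.99, 6))  # ~0.94
```

The numbers (six problems, 90% vs. 99%) are hypothetical; the point is just that modest per-problem gains compound into a large gain in the probability of an error-free test.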
Gotcha. That’s much more clear to me—thanks.