When you are presented with a new concept, the first step is to “mechanically” learn it. At that point, you are able to solve only questions that closely match what you were taught.
The next step is to really understand the meaning of the concept, in a deeper sense. In school, this is usually achieved through exercises that get progressively harder. Harder, in this case, means that the questions diverge more and more from the learnt material and increasingly require a deeper understanding of the concept.
If you only go as far as learning the “mechanical” part of a concept, you will usually forget it rather quickly. If, on the other hand, you really understand the meaning behind it, the knowledge sticks much longer. And not only that: it is okay to forget the mechanical steps and keep only the basic understanding, since you can quickly and easily look up the mechanical part once you recognize a problem.
I think this is all a normal learning process, practised every day in school.
I have two observations, one personal and one general:
Once, I tried to apply artificial neural nets to the task of evaluating positions in the game of Go. I made a very basic error: I trained the net only on positive examples. The net quickly learned to give high scores to these, but when I then tested it on bad positions, it still reported high scores. Perhaps a naive mistake, but you have to learn somehow.
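As a minimal sketch of the pitfall (logistic regression on made-up two-dimensional features, nothing like a real Go evaluator; the data and the “good”/“bad” geometry are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for position features: "good" positions cluster
# around (1, 0), "bad" positions around (0, 1). Invented data.
good = rng.normal([1.0, 0.0], 0.3, size=(200, 2))
bad = rng.normal([0.0, 1.0], 0.3, size=(200, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Plain logistic regression via gradient descent on cross-entropy."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        grad = sigmoid(X @ w + b) - y  # dLoss/dlogit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# The mistake: train on positive examples only (every label is 1).
# Nothing in the loss ever penalizes a high score on a bad position.
w, b = train(good, np.ones(len(good)))
print("bad positions, positives-only model:", sigmoid(bad @ w + b).mean())

# The fix: also include negative examples, labelled 0.
X = np.vstack([good, bad])
y = np.concatenate([np.ones(len(good)), np.zeros(len(bad))])
w, b = train(X, y)
print("bad positions, balanced model:", sigmoid(bad @ w + b).mean())
```

In this sketch, the cheapest way to fit “everything is good” is to push scores up everywhere, so the bad positions also end up near 1.0; only once the negative examples are added does the model actually learn to separate the two.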
A very common example is software testing. Usually, people pay much attention to testing the positive cases, verifying that they work as they should. Less time is spent testing things that should not work, which sometimes results in programs that generate answers when they should not. The problem is that the positive cases usually form a limited set, while the negative cases are almost infinite.
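To make that concrete, here is a small sketch of a test suite that covers both halves; the `parse_port` function and its valid range are hypothetical, chosen only to illustrate the point:

```python
import unittest

def parse_port(text: str) -> int:
    """Parse a TCP port number, rejecting anything outside 1-65535.
    Hypothetical example function for illustration only."""
    port = int(text)  # raises ValueError on non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

class TestParsePort(unittest.TestCase):
    def test_positive_cases(self):
        # The part most suites cover: valid input, expected answer.
        self.assertEqual(parse_port("80"), 80)
        self.assertEqual(parse_port("65535"), 65535)

    def test_negative_cases(self):
        # The easily forgotten half: input that must NOT yield an answer.
        for bad in ["0", "65536", "-1", "http", "", "80.5"]:
            with self.assertRaises(ValueError):
                parse_port(bad)

if __name__ == "__main__":
    unittest.main()
```

The positive tests pin down a handful of known-good inputs, but it is the negative tests that would catch a version of `parse_port` that happily returns 0 or 65536. Even then, the list of bad inputs only samples the nearly infinite space of things that should fail.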