I have two observations, one personal and one general:
Once, I tried to apply artificial neural networks to the task of evaluating positions in the game of Go. I made a very basic error, which was to train the net only on positive examples. The net quickly learned to give those high scores, but when I then tested it on bad positions, it reported high scores for those as well. Maybe a naive mistake, but you have to learn somehow.
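Here is a minimal sketch of that failure mode (with made-up data, not my original Go features): a single logistic unit trained by gradient descent where every label says "good". Since nothing in the loss ever penalizes a high score, the cheapest fit is to score everything high, including inputs the model has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for "good" positions -- the only training data.
X_good = rng.uniform(0.5, 1.0, size=(100, 5))
y = np.ones(100)  # every label says "good"; there are no negative examples

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a single logistic unit by gradient descent on the log loss.
w, b = np.zeros(5), 0.0
for _ in range(1000):
    p = sigmoid(X_good @ w + b)
    w -= 0.5 * (X_good.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# "Bad" positions drawn from a clearly different region of feature space...
X_bad = rng.uniform(0.0, 0.5, size=(5, 5))

# ...still get high scores: nothing in training ever pushed a score down,
# so the fitted unit ends up scoring essentially everything as "good".
print(sigmoid(X_good @ w + b).mean())  # near 1.0, as intended
print(sigmoid(X_bad @ w + b))          # also well above 0.5
```

The fix is as simple as the mistake: include negative examples in the training set, so the loss actually punishes high scores on bad positions.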
A very common example is software testing. People usually pay a lot of attention to testing the positive cases, verifying that they work as they should. Less time is spent testing things that should not work, which sometimes results in programs that produce answers when they should not. The problem is that the positive cases usually form a limited set, while the negative cases are practically infinite.
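To make that concrete, here is a sketch with a made-up validator (the function and the test cases are purely illustrative): the positive tests can enumerate the valid inputs we care about, while the negative tests can only ever sample a few points from the huge space of inputs that must be rejected.

```python
def parse_port(text: str) -> int:
    """Parse a TCP port number, rejecting anything out of range."""
    value = int(text)  # raises ValueError on non-numeric input
    if not 0 < value < 65536:
        raise ValueError(f"port out of range: {value}")
    return value

# Positive cases: a small, finite list of inputs that must succeed.
assert parse_port("80") == 80
assert parse_port("65535") == 65535

# Negative cases: a handful of samples from the effectively infinite
# space of inputs that must fail -- the part that is easy to under-test.
for bad in ["", "abc", "-1", "0", "99999", "80.5"]:
    try:
        parse_port(bad)
    except ValueError:
        pass  # rejected, as it should be
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```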