That seems to be the go-to story for NNs. I remember hearing it back in grad school, though now I’m wondering if it is just an urban legend.
Some cursory googling shows others wondering the same thing.
Does anyone have an actual cite for this? Or, if not an actual cite, have you at least heard a concrete cite for it at some point?
I believe that Marvin Minsky is the origin of the story. He tells it in the second half of the 3-minute video Embarrassing mistakes in perceptron research. This version of the story is not as simple as sunny/cloudy: the two rolls of film were developed differently, leading to a uniform change in brightness.
The first half of the video is a similar story about distinguishing musical scores using the last note in the lower right corner. Somewhere,* quite recently, I heard people argue that this story was more credible because Minsky makes a definite claim of direct involvement and maybe the details are a little more concrete.
[How did I find this? The key was knowing the attribution to Minsky. It comes up immediately searching for “Minsky pictures of tanks.” But “Minsky tank neural net” is a poor search because of the contributions of David Tank. Note that the title of the video contains the word “perceptron,” suggesting that the story is from the 60s.]
* Now that I read Gwern’s comment, it must have been his discussion on Reddit a month ago.
Added, a week later: I’m nervous about whether Minsky really is the origin. When I first wrote this, I thought I had seen him tell the story in two places.
Previous discussion: http://lesswrong.com/lw/td/magical_categories/4v4a
I would say that, given the Minsky story and how common a problem overfitting is, I believe something at least very similar to the tank story did happen; and if it didn’t, there were nevertheless real problems with neural nets overfitting.
(That said, I think modern deep nets may get too much of a bad rap on this issue. Yes, they might do weird things like focusing on textures or whatever is going on with adversarial examples, but they still recognize very well out-of-sample, so they are not simply overfitting to the training set as in these old anecdotes. Their problems are different.)
This isn’t an example of overfitting, but of the training set not being iid. You wanted a random sample of pictures of tanks, but you instead got a highly biased sample that is drawn from a different distribution than the test set.
“This isn’t an example of overfitting, but of the training set not being iid.”
Upvote for the first half of that sentence, but I’m not sure how the second applies. The set of tanks is iid; the issue is that the creators of the training set allowed tank/not-tank to be correlated with an extraneous variable. It’s like having a drug trial where the placebos are one color and the real drug is another.
I guess I meant it’s not iid from the distribution you really wanted to sample. The hypothetical training set is all possible pictures of tanks, but you only sampled the ones that were taken during daytime.
I’m not sure you understand what “iid” means. It means that each sample is drawn from the same distribution, and each sample is independent of the others. The term “iid” isn’t doing any work in your statement; you could just say “It’s not from the distribution you really want to sample”, and it would be just as informative.
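Since the subthread ends on what “drawn from a different distribution” actually buys you, here is a minimal sketch of the failure mode being described. This is my own illustration, not anything from the original experiments: a synthetic “brightness” feature that happens to predict the label perfectly in the training sample but is unrelated to it in a test sample drawn from the distribution you actually care about. All feature names and numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_pictures(n, brightness_follows_label):
    """Each 'picture' is two features: [brightness, weak genuine tank cue]."""
    is_tank = rng.random(n) < 0.5
    if brightness_follows_label:
        # Like the anecdote: every tank photo happens to be bright, every non-tank photo dark.
        brightness = is_tank.astype(float) + 0.1 * rng.standard_normal(n)
    else:
        # At deployment, brightness has nothing to do with tanks.
        brightness = rng.random(n)
    weak_cue = 0.3 * is_tank + rng.standard_normal(n)  # real but noisy signal
    X = np.column_stack([brightness, weak_cue])
    return X, is_tank.astype(int)

X_train, y_train = make_pictures(500, brightness_follows_label=True)
X_test, y_test = make_pictures(500, brightness_follows_label=False)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # ~1.0: brightness separates the classes
print("test accuracy:", clf.score(X_test, y_test))     # near chance: the learned rule doesn't transfer
```

The classifier fits the training data almost perfectly by leaning on brightness, then does little better than chance on the honestly sampled test set, which is the tank story in miniature whether you describe it as overfitting or as sampling from the wrong distribution.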