Once upon a time—I’ve seen this story in several versions and several places, sometimes cited as fact, but I’ve never tracked down an original source—once upon a time, I say, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks.
Probably apocryphal. I haven’t been able to track this down, despite having heard the story both in computer ethics class and at academic conferences.
I poked around in Google Books; the earliest clear reference I found was the 2000 Cartwright book Intelligent data analysis in science, which seems to attribute it to the TV show Horizon. (No further info—just snippet view.)

Here is one supposedly from 1998, though it’s hardly academic.

A Redditor provides not one but two versions, from “Embarrassing mistakes in perceptron research” by Marvin Minsky, recorded 29–31 January 2011:
Like I had a friend in Italy who had a perceptron that looked at a visual… it had visual inputs. So, he… he had scores of music written by Bach of chorales and he had scores of chorales written by music students at the local conservatory. And he had a perceptron—a big machine—that looked at these and those and tried to distinguish between them. And he was able to train it to distinguish between the masterpieces by Bach and the pretty good chorales by the conservatory students. Well, so, he showed us this data and I was looking through it and what I discovered was that in the lower left hand corner of each page, one of the sets of data had single whole notes. And I think the ones by the students usually had four quarter notes. So that, in fact, it was possible to distinguish between these two classes of… of pieces of music just by looking at the lower left… lower right hand corner of the page. So, I told this to the… to our scientist friend and he went through the data and he said: ‘You guessed right. That’s… that’s how it happened to make that distinction.’ We thought it was very funny.

A similar thing happened here in the United States at one of our research institutions. Where a perceptron had been trained to distinguish between—this was for military purposes—It could… it was looking at a scene of a forest in which there were camouflaged tanks in one picture and no camouflaged tanks in the other. And the perceptron—after a little training—got… made a 100% correct distinction between these two different sets of photographs. Then they were embarrassed a few hours later to discover that the two rolls of film had been developed differently. And so these pictures were just a little darker than all of these pictures and the perceptron was just measuring the total amount of light in the scene. But it was very clever of the perceptron to find some way of making the distinction.
While the Italian story seems to be true, since Minsky says he knew the Italian and personally spotted how the neural net was overfitting, for the tank story he just recounts the usual urban legend as happening at some unnamed ‘institution’. There is a new twist, though: this time it’s the exposure of the photographic film rather than the forest or clouds or something. I remain suspicious of the tank story because it has all the hallmarks of an urban legend: a cute, convenient story which everyone seems to know and to have been told by someone else, yet when you trace the citations, they never end up anywhere and never get more specific, while the various versions keep mutating (film? night vs. day? grass vs. forest?).

Another version is provided by Ed Fredkin, via Eliezer Yudkowsky, in http://lesswrong.com/lw/7qz/machine_learning_and_unintended_consequences/:
At the end of the talk I stood up and made the comment that it was obvious that the picture with the tanks was made on a sunny day while the other picture (of the same field without the tanks) was made on a cloudy day. I suggested that the “neural net” had merely trained itself to recognize the difference between a bright picture and a dim picture.
This is still not a source, because it’s a recollection 50 years later and so highly unreliable; and even at face value, all Fredkin did was suggest that the NN might have picked up on a lighting difference. That is not proof that it did, much less proof of all the extraneous details, such as how there were 50 photos in this set and 50 in that, or how the Pentagon then deployed it only for it to fail in the field (and what happened to the story being set in the 1980s?). This is classic urban legend/myth behavior: accreting plausible, entertaining details in the retelling.
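Whatever the provenance of the story, the failure mode it describes is real and trivially easy to reproduce. Below is a minimal sketch in Python (all names, sizes, and brightness values here are invented for illustration, not a reconstruction of any actual experiment): a plain perceptron is trained on two batches of random ‘photos’ whose only systematic difference is overall brightness, standing in for the two differently developed rolls of film. It scores near-perfectly on a test set that shares the confound, and at chance once the confound is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photos(n, base_brightness, size=16):
    # Random-noise "photos": pixel values in [base, base + 0.5).
    # base_brightness stands in for how each roll of film was developed.
    return rng.random((n, size * size)) * 0.5 + base_brightness

# Confounded training data: the "tank" roll happened to come out darker.
X = np.vstack([make_photos(50, 0.20),    # tanks (darker roll)
               make_photos(50, 0.30)])   # no tanks (lighter roll)
y = np.array([1] * 50 + [-1] * 50)

# Train an ordinary perceptron on raw pixels.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:       # misclassified: nudge the boundary
            w, b = w + yi * xi, b + yi

def accuracy(X_test, y_test):
    return np.mean(np.sign(X_test @ w + b) == y_test)

y_test = np.array([1] * 50 + [-1] * 50)
# Fresh photos with the same exposure confound: near-perfect "tank detection".
print(accuracy(np.vstack([make_photos(50, 0.20), make_photos(50, 0.30)]), y_test))
# Fresh photos developed identically: accuracy collapses to ~0.5 (chance),
# because the perceptron only ever learned total image brightness.
print(accuracy(np.vstack([make_photos(50, 0.25), make_photos(50, 0.25)]), y_test))
```

A perceptron is used here only to match the era of the story; a modern deep net trained on the same data would just as happily latch onto the same shortcut feature.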
I’ve compiled and expanded all the examples at https://www.gwern.net/Tanks.