I think that Gromov may not have expressed himself very clearly, and his remarks may not have been intended to be taken literally. Consider the many starfish in this picture. By looking at the photo, one can infer that any given starfish has five-fold symmetry with high probability, even though some of the ones in the distance wouldn’t look like they had five-fold symmetry (or even look like starfish at all) if they were viewed in isolation. I don’t think that existing AI has the capacity to make these sorts of inferences at a high level of generality.
I think #3 is the real issue. Most of the starfish in that picture aren’t five-fold symmetric, but a person who had never seen a starfish before would first notice “those all look like variations of a general form” and then “that general form is five-fold symmetric”. I don’t know of any learning algorithms that do this, but I also don’t know what to search for.
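To make the two-step idea concrete, here is a toy sketch (assuming numpy and scipy, with a synthetic five-armed shape standing in for the starfish, and with the examples conveniently pre-aligned, which is exactly the part I don’t know how to automate): average many noisy examples into a prototype, then check whether the prototype is five-fold symmetric.

```python
# Toy illustration only, not a learning algorithm: (1) average many noisy,
# individually ambiguous examples into a prototype ("the general form"),
# then (2) test the prototype for five-fold rotational symmetry.
# The synthetic five-armed shape and the pre-alignment are both assumptions.

import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

# A clean five-armed "star" used to generate the examples.
size = 96
y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
r, theta = np.hypot(x, y), np.arctan2(y, x)
clean = (r < 0.4 + 0.25 * np.cos(5 * theta)).astype(float)

def five_fold_score(img):
    """Correlation between an image and itself rotated by 72 degrees."""
    rotated = rotate(img, 72, reshape=False, mode="nearest")
    return float(np.corrcoef(img.ravel(), rotated.ravel())[0, 1])

# Step 1: each example is too noisy to judge on its own, but the average
# of many of them recovers the general form.
examples = [np.clip(clean + rng.normal(0, 1.0, clean.shape), 0, 1)
            for _ in range(50)]
prototype = np.mean(examples, axis=0)

# Step 2: the prototype scores much higher for five-fold symmetry than
# any single example does.
print("one example:", five_fold_score(examples[0]))
print("prototype:  ", five_fold_score(prototype))
```

The part that does the real work for a human, recognising that these things are instances of a common form and aligning them, is exactly what I’m saying I don’t know how to get an algorithm to do.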
So you’re probably right that it’s an issue of “pattern recognition ability”, but it’s not as bad as you originally said.
Are you sure? This sounds possible.
Possible in principle, but my understanding of the current state of AI is that computer programs are nowhere near being able to do this.
Are you saying we can’t make programs that would identify portions of an image that are highly fold-symmetric? This seems really unlikely to me.
A bit of searching turns up a paper on Skewed Rotation Symmetry Group Detection, which appears to do this.
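For what it’s worth, the basic mechanics of the narrower task don’t seem hard. Here’s a minimal sketch, assuming numpy and scipy, that scores a patch for n-fold rotational symmetry by rotating it and correlating. It is emphatically not the method from that paper, just a naive baseline.

```python
# A rough sketch, not the method from the paper: score an image patch for
# n-fold rotational symmetry by rotating it about its centre by 360/n degrees
# and checking how strongly it correlates with itself. Assumes numpy and scipy.

import numpy as np
from scipy.ndimage import rotate

def fold_symmetry_score(patch, n):
    """Correlation between `patch` and itself rotated by 360/n degrees.

    Values near 1 suggest approximate n-fold rotational symmetry about the
    centre of the patch; values near 0 suggest none.
    """
    rotated = rotate(patch, 360.0 / n, reshape=False, mode="nearest")
    return float(np.corrcoef(patch.ravel(), rotated.ravel())[0, 1])

def best_fold(patch, candidates=(2, 3, 4, 5, 6)):
    """Return the candidate fold order with the highest score, plus all scores."""
    scores = {n: fold_symmetry_score(patch, n) for n in candidates}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    # Synthetic test image: a five-armed "star" shape.
    size = 128
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    star = (r < 0.4 + 0.25 * np.cos(5 * theta)).astype(float)
    print(best_fold(star))  # the five-fold score should come out on top
```

Slid over windows of an image, something like this would flag roughly fold-symmetric regions; presumably the “skewed” part of the paper is about symmetry seen at an angle, which this toy score ignores entirely.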
This is interesting to me, and I had not known that such research has been done.
I’ve heard that there’s a persistent problem in machine learning of people overfitting their algorithms to particular data sets. The diversity of examples in the paper appears to be impressive, but it could be that the algorithm would break on images that look, to us, qualitatively similar to the ones displayed.