Why is the ingenuity of human mathematicians one of the last things that you’d expect to see before seeing human-level-or-smarter AI? My intuition is that it’s one of the earlier things that you’d expect to see. Mikhail Gromov wrote:
The mathematical ability of each person’s brain by far exceeds those of the greatest geniuses of all time. Nobody, given the input the brain starts with, could be able to arrive at such a level of abstraction, for instance, as the five-fold symmetry (for example, a starfish), which you, or rather your brain, recognizes instantaneously regardless of a particular size, shape, or color of an object.
That’s a different meaning of the term “mathematical ability”. In this context, you should read it as “calculating ability”, and computers are pretty good at that—although still not as good as our brains.
It was not intended to imply that low-level brainware is any good at abstract mathematics.
Where did you get your interpretation from, and why do you think that yours is more accurate than mine? :-)
I believe that he was referring to the brain’s pattern recognition ability rather than calculating ability. Existing supercomputers have a lot of calculating ability, but they can’t recognize five-fold symmetry regardless of a particular size, shape or color of an object.
Are you sure? This sounds possible.
Possible in principle, but my understanding of the current state of AI is that computer programs are nowhere near being able to do this.
Are you saying we can’t make programs that would identify portions of an image that are highly five-fold symmetric? This seems really unlikely to me.
Some looking turns up a paper on Skewed Rotation Symmetry Group Detection, which appears to do this.
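For concreteness, here is a minimal sketch of the naive approach, not the method from that paper: rotate a patch about its centre by multiples of 360/5 degrees and measure how well the rotations correlate with the original. The use of numpy/scipy and every numeric choice below are assumptions made for illustration.

```python
# A minimal sketch (not the linked paper's method): score how close a grayscale
# patch is to five-fold rotational symmetry by rotating it about its centre and
# correlating each rotation with the original. numpy/scipy and all numeric
# choices are assumptions made for this illustration.
import numpy as np
from scipy.ndimage import rotate

def rotational_symmetry_score(patch, order=5):
    """Mean normalized correlation between `patch` and its rotations by k*360/order degrees."""
    patch = patch.astype(float)
    patch = patch - patch.mean()
    scores = []
    for k in range(1, order):
        rotated = rotate(patch, angle=360.0 * k / order, reshape=False, mode="nearest")
        rotated = rotated - rotated.mean()
        denom = np.linalg.norm(patch) * np.linalg.norm(rotated)
        scores.append(float((patch * rotated).sum() / denom) if denom else 0.0)
    return float(np.mean(scores))

# Toy check: a five-armed star drawn in polar coordinates scores near 1,
# while uniform noise scores near 0.
yy, xx = np.mgrid[-64:64, -64:64]
r, theta = np.hypot(xx, yy), np.arctan2(yy, xx)
star = (r < 30 + 20 * np.cos(5 * theta)).astype(float)
print(rotational_symmetry_score(star))                      # close to 1
print(rotational_symmetry_score(np.random.rand(128, 128)))  # close to 0
```

Sliding a score like this over windows of an image would flag candidate five-fold-symmetric regions; it says nothing, of course, about the harder cases (occlusion, distance, deformation) raised further down the thread.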
This is interesting to me, and I had not known that such research had been done.
I’ve heard that there’s a consistent problem in machine learning of people overfitting their models to the particular data sets they develop on. The diversity of examples in the paper appears to be impressive, but it could be that the algorithm would break on images that look, to us, qualitatively similar to the ones displayed.
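To illustrate the worry with a toy example that has nothing to do with that paper: a model tuned hard against one small data set can look perfect on that set and still do badly on fresh samples from the same distribution, which is why a held-out evaluation is the thing to ask about.

```python
# Toy illustration of overfitting (unrelated to the linked paper): fit a
# deliberately over-flexible degree-9 polynomial to 12 noisy samples of a
# smooth function, then compare the error on the training points with the
# error on fresh samples. All numbers here are arbitrary choices for the demo.
import numpy as np

rng = np.random.default_rng(0)

def noisy_samples(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(scale=0.1, size=n)

x_train, y_train = noisy_samples(12)
x_test, y_test = noisy_samples(200)

coeffs = np.polyfit(x_train, y_train, deg=9)
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.4f}   held-out MSE: {test_mse:.4f}")
```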
I think that Gromov may not have expressed himself very clearly, and his remarks may not have been intended to be taken literally. Consider the many starfish in this picture. By looking at the photo, one can infer that any given starfish has five-fold symmetry with high probability, even though some of the ones in the distance wouldn’t look like they had five-fold symmetry (or even look like starfish at all) if they were viewed in isolation. I don’t think that existing AI has the capacity to make these sorts of inferences at a high level of generality.
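One way to make that “high probability” inference concrete, under an exchangeability assumption of my own (the prior and the counts below are made up): treat five-fold symmetry as a trait shared across the photographed population, count it on the clearly visible individuals, and read off the posterior predictive probability for a blurry one in the distance from a simple Beta-Binomial model.

```python
# A hedged sketch of the population-level inference: model "this starfish is
# five-fold symmetric" as an exchangeable Bernoulli trait, observe it on the
# clearly visible individuals, and compute the posterior predictive probability
# for a distant, blurry one. The Beta(1, 1) prior and the 18-of-20 count are
# invented for illustration.
from fractions import Fraction

def predictive_prob(symmetric_seen, total_seen, alpha=1, beta=1):
    """Posterior predictive P(next individual is symmetric) under a Beta-Binomial model."""
    return Fraction(alpha + symmetric_seen, alpha + beta + total_seen)

# Suppose 18 of the 20 starfish we can see clearly look five-fold symmetric:
print(predictive_prob(18, 20))   # 19/22, about 0.86
```

The arithmetic is trivial; the hard part is the step before it, recognizing that the clear shapes and the distant blobs are instances of the same kind of thing.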
I think #3 is the real issue. Most of the starfish in that picture aren’t five-fold symmetric, but a person who had never seen a starfish before would first notice “those all look like variations of a general form” and then “that general form is five-fold symmetric”. I don’t know of any learning algorithms that do this, but I also don’t know what to search for.
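Speculatively, that two-step process could be sketched as: first collapse the segmented shapes into one “general form” (below, just by rescaling and averaging binary masks, a crude stand-in for real shape alignment), then run a rotational-symmetry test on that form rather than on any individual shape. This is my own sketch of the idea, not an algorithm I can point to in the literature.

```python
# A speculative sketch of the two-step idea: (1) collapse many segmented shapes
# into one "general form" by rescaling each mask to a common canvas and
# averaging, (2) test that averaged form for five-fold symmetry, e.g. with
# rotational_symmetry_score from the earlier snippet. The 64x64 canvas and
# plain averaging are assumptions; real shape alignment would be needed.
import numpy as np
from scipy.ndimage import zoom

def prototype(masks, size=64):
    """Rescale each binary mask to a size-by-size canvas and average them."""
    resized = []
    for m in masks:
        m = m.astype(float)
        factors = (size / m.shape[0], size / m.shape[1])
        resized.append(zoom(m, factors, order=1))
    return np.mean(resized, axis=0)

# Hypothetical usage, assuming `segmented_starfish_masks` has been produced by
# some segmentation step:
#   form = prototype(segmented_starfish_masks)
#   print(rotational_symmetry_score(form, order=5))
```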
So you’re probably right that it’s an issue of “pattern recognition ability”, but it’s not as bad as you originally said.