I was talking about the same architecture and training procedure. AI design space is high-dimensional. What I'm arguing is that the set of designs likely to be built in the real world is a long, skinny blob. Pinpointing a location exactly takes many coordinates, but for a rough gesture, saying how far along the blob a design sits is good enough. You need multiple coordinates to pinpoint a bug on a breadstick, but just saying how far along the breadstick it is will tell you where to aim a flyswatter.
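To make the geometry concrete, here's a minimal numerical sketch. Every number in it is an illustrative assumption, not data about real AI designs: 50 stand-in design dimensions, a gentle one-dimensional curve through them, and a little noise for the blob's thickness. The question it answers is whether one coordinate (projection onto the first principal component) locates a point nearly as well as all 50 do.

```python
# Minimal sketch of the "long and skinny blob" picture. All quantities
# (dimensions, curve, noise level) are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dim, n_points = 50, 1000

# A smooth 1-D curve through 50-D space: each coordinate is a slow sinusoid
# of a shared parameter t ("how far along the breadstick").
t = rng.uniform(0.0, 1.0, n_points)
freqs = rng.uniform(0.2, 1.0, dim)
phases = rng.uniform(0.0, 2.0 * np.pi, dim)
curve = np.sin(np.outer(t, freqs) + phases)

# The blob: the curve plus small isotropic noise (the "skinny" directions).
blob = curve + 0.03 * rng.normal(size=(n_points, dim))

# PCA via SVD of the centered data. If the blob really is long and skinny,
# one component should carry most of the variance, and its score should
# track the true parameter t.
centered = blob - blob.mean(axis=0)
U, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
score = U[:, 0] * s[0]  # "how far along the blob" each point is

print(f"variance explained by 1st principal component: {explained[0]:.1%}")
print(f"|corr(first-component score, t)|: {abs(np.corrcoef(score, t)[0, 1]):.3f}")
```

On a blob built this way you should see the first component carrying the bulk of the variance and its score correlating near-perfectly with t: one coordinate does most of the locating, and the other 49 only matter if the flyswatter has to be exact.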
There are architectures that produce bad results on most image classification tasks, and architectures that reliably produce good results. (If an algorithm can reliably tell dogs from squirrels with only a few examples of each, I expect it can also tell cats from teapots. To be clear, I'm comparing different neural nets with the same architecture and training procedure.)