On the other hand, improvements on ImageNet itself (the dataset AlexNet excelled on at the time) are logarithmic rather than exponential, and at this point seem to have plateaued at around human-level ability or a bit below (maybe people got bored of it?).
The best models are more accurate than the ground-truth labels.

Are we done with ImageNet?
https://arxiv.org/abs/2006.07159

"Yes, and no. We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition."
Figure 7 shows that model progress is much larger than the raw progression of ImageNet scores would indicate.
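For concreteness, here is a minimal sketch of how accuracy under reassessed, multi-label annotations (where one image may have several acceptable labels) differs from standard top-1 accuracy against a single original label. The function names and toy numbers are illustrative, not taken from the paper's code.

    def original_accuracy(predictions, original_labels):
        """Standard top-1 accuracy: prediction must equal the single original label."""
        correct = sum(p == y for p, y in zip(predictions, original_labels))
        return correct / len(predictions)

    def reassessed_accuracy(predictions, reassessed_labels):
        """A prediction counts as correct if it is any of the (possibly multiple)
        labels judged valid for that image; images whose reassessed label set is
        empty are skipped."""
        scored = [(p in labels) for p, labels in zip(predictions, reassessed_labels) if labels]
        return sum(scored) / len(scored)

    # Toy example: 4 images, classes given as integers.
    preds       = [207, 250, 1, 850]
    orig_labels = [207, 249, 1, 851]                          # one label per image
    reassessed  = [{207}, {249, 250}, set(), {850, 851}]      # possibly several valid labels

    print(original_accuracy(preds, orig_labels))   # 0.5 under the original labels
    print(reassessed_accuracy(preds, reassessed))  # 1.0 under the reassessed labels

Under this kind of evaluation, a model penalized for a "wrong" but arguably valid prediction (e.g. one of two dogs in the same image) gets credit, which is how apparent gains can differ so much between the two label sets.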