A fifth method: you could use unsupervised learning to learn some multidimensional representations for the images, and then understand the classifier’s diversity in terms of the diversity of these multidimensional representations.
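For concreteness, here's a rough sketch of what I mean (in Python). PCA stands in for whatever unsupervised representation learner you'd actually use (an autoencoder, a contrastive model, etc.), and the log-determinant of the embedding covariance is just one possible diversity score; the file names and array shapes are made up for illustration.

```python
# Minimal sketch: learn unlabelled representations, then score dataset
# diversity by how much of the representation space the images cover.
import numpy as np
from sklearn.decomposition import PCA

def embed(images: np.ndarray, n_dims: int = 16) -> np.ndarray:
    """Learn a low-dimensional representation of the images without labels."""
    # images: (n_samples, n_pixels) array of flattened images.
    pca = PCA(n_components=n_dims)
    return pca.fit_transform(images)

def diversity(embeddings: np.ndarray) -> float:
    """Score diversity as the log-volume spanned by the embeddings."""
    # Log-determinant of the embedding covariance: higher means the images
    # spread out over more of the representation space.
    cov = np.cov(embeddings, rowvar=False)
    _, logdet = np.linalg.slogdet(cov + 1e-6 * np.eye(cov.shape[0]))
    return logdet

# Hypothetical usage: compare a monotonous husky set against the lion set.
# huskies = np.load("huskies.npy")   # (n, pixels); placeholder file names
# lions = np.load("lions.npy")
# reps = embed(np.vstack([huskies, lions]))
# print(diversity(reps[:len(huskies)]), diversity(reps[len(huskies):]))
```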
Interesting; would you need an unlabelled dataset to do this, or would the lion and husky sets be sufficient?
I think it depends on your method. Most current methods would probably benefit heavily from a diverse dataset, as they tend to work by "compressing" or "clustering" the images in some way. However, it seems like they should still work to an extent on the monotonous husky/lion datasets, just not as well.
However, if one is willing to go beyond current methods, then I feel like it should be possible to make unsupervised representation learning methods that are better able to deal with monotonous datasets. I'm playing with some ideas for this in my spare time, because it seems like a promising approach to me, though I haven't developed them much yet.