The space of cases to consider can be large along many dimensions. The countable limit of a sequence of extensions need not be a fixed point of the magical improvement oracle.
Indeed. We may need to put a measure on the set of cases and make a generalization guarantee that refers to solving X% of the remaining cases. That would be a much stronger guarantee.
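As a rough sketch of what such a guarantee could look like (the measure μ over cases, the learned classifier h, and the intended concept h* are illustrative notation, not anything fixed by this discussion):

\[
\Pr_{x \sim \mu}\big[\, h(x) = h^{*}(x) \,\big] \;\ge\; \frac{X}{100},
\]

i.e. the classifier agrees with the intended concept on at least X% of cases drawn according to μ, rather than only on the labeled easy set.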
The style of counter-example is to construct two settings (“models” in the lingo of logic) A and B with the same labeled easy set (and the same context made available to the classifier), where the correct answer for some datapoint x differs between the two settings.
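To spell that out (the names D, c, and the superscripts are illustrative notation only): let D = {(x_1, y_1), …, (x_n, y_n)} be the shared labeled easy set and let c^A, c^B denote the correct concept in settings A and B respectively. The construction asks for

\[
c^{A}(x_i) = c^{B}(x_i) = y_i \ \text{ for all } (x_i, y_i) \in D,
\qquad\text{yet}\qquad
c^{A}(x) \neq c^{B}(x) \ \text{ for some datapoint } x \notin D.
\]

Since the classifier only sees D (and the shared context), whatever it outputs on x is wrong in at least one of A and B, so no guarantee based on the labeled set alone can cover x.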
I appreciate the suggestion, but I think that line of argument would also conclude that statistical learning is impossible, no? When I give a classifier a set of labelled cat and dog images and ask it to classify which are cats and which are dogs, it’s always possible that I was really asking some question that was not exactly about cats versus dogs, but in practice it doesn’t work out that way.
Also, humans do communicate about concepts with one another; they eventually “get it” with respect to each other’s concept boundaries, and it’s possible to see that someone “got it” and trust that they now have the same concept that I do. So it seems possible to learn concepts in a trustworthy way from very small datasets, though it’s not a very “black box” kind of phenomenon.