I agree that there will be cases of ontological crisis where it’s not clear what the answer is, e.g. whether the mirrored dog counts as “healthy”. However, I feel like the thing I’m pointing at is that there is some sort of closure of any given set of training examples such that, under some fairly weak assumptions, we can know that everything in this expanded set is “definitely not going too far”. As a trivial example, anything that is a direct logical consequence of something in the training set would be part of the closure. I expect any ELK solution to look something like that. This corresponds directly to the case where the ontology identification process converges to some set smaller than the set of all cases.
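To make the “closure under direct logical consequence” picture a bit more concrete, here is a toy sketch (purely my own illustration, not anything from the ELK report; the facts and rules are made-up stand-ins for labeled training examples and consequence relations): compute the fixpoint of a fact set under a given set of direct-consequence rules, and treat anything inside that fixpoint as “definitely not going too far”.

```python
def deductive_closure(facts, rules):
    """Smallest superset of `facts` closed under `rules`.

    `facts` is a set of hashable atoms; `rules` is an iterable of
    (premises, conclusion) pairs, where `premises` is a frozenset of atoms.
    """
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only once all of its premises are already in the closure.
            if premises <= closure and conclusion not in closure:
                closure.add(conclusion)
                changed = True
    return closure


if __name__ == "__main__":
    # Hypothetical atoms, loosely echoing the SmartVault/diamond example.
    facts = {"diamond_in_room", "camera_shows_diamond"}
    rules = [
        (frozenset({"diamond_in_room"}), "diamond_exists"),
        (frozenset({"diamond_exists", "camera_shows_diamond"}), "report_is_accurate"),
    ]
    print(deductive_closure(facts, rules))
    # {'diamond_in_room', 'camera_shows_diamond', 'diamond_exists', 'report_is_accurate'}
```

The point of the sketch is just that the expanded set is generated mechanically from the training set plus weak assumptions (the rules), and it can easily converge to something much smaller than the space of all possible cases, which is the situation I mean above.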