> Proof: The only situation in which the iteration scheme does not update the decision boundary B is when we fail to find a predictor that does useful computation relative to E. *By hypothesis, the only way this can happen is if E does not contain all of E0 or E = C.* Since we start with E0 and only grow the easy set, it must be that E = C.
(emphasis mine)
To me it looks like the emphasized assumption (that it's always possible to find a predictor that does useful computation) is the main source of your surprising result, since without it the iteration argument would not go through: the scheme could stall with E strictly between E0 and C.
That assumption strikes me as too strong: it doesn't seem realistic, since it requires either that information is created out of nowhere or that the easy set (plus maybe the training setup) already contains all information about the full set. It also doesn't seem necessary for a solution to ELK to satisfy it (curious if you disagree?).
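To make the dependence concrete, here is a minimal toy sketch of the iteration as I read it. All names here (`iterate_boundary`, `find_useful_predictor`, `oracle`, `grow`) are mine, not yours, and the toy oracle deliberately builds in the assumption I'm objecting to:

```python
from typing import Callable, Optional, Set

def iterate_boundary(
    E0: Set[int],
    C: Set[int],
    find_useful_predictor: Callable[[Set[int]], Optional[Callable[[], int]]],
    update_boundary: Callable[[Callable[[], int], Set[int]], Set[int]],
) -> Set[int]:
    # Start from the easy set E0 and only ever grow E.
    E = set(E0)
    while E != C:
        predictor = find_useful_predictor(E)
        if predictor is None:
            # Without the emphasized assumption, the scheme can stop here
            # with E0 <= E < C, and the conclusion E = C no longer follows.
            break
        E = update_boundary(predictor, E)
    return E

# Toy instantiation. The oracle below *builds in* the assumption: as long
# as E != C it always finds a "useful" predictor, here one that correctly
# labels one more hard example. This is exactly the step that appears to
# conjure information out of nowhere.
C = set(range(10))
E0 = {0, 1}

def oracle(E: Set[int]) -> Optional[Callable[[], int]]:
    remaining = C - E
    if not remaining:
        return None
    x = min(remaining)
    return lambda: x

def grow(predictor: Callable[[], int], E: Set[int]) -> Set[int]:
    return E | {predictor()}

assert iterate_boundary(E0, C, oracle, grow) == C
```

Swap `oracle` for any predictor-finder that can fail on some E with E0 ⊆ E ⊂ C and the loop exits early, so E = C is carried entirely by that one hypothesis rather than by the iteration itself.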