Understanding Machine Learning (III)
This is the final part of a 3-part series on the fundamentals of Machine Learning as defined by this book (although it will transition into a longer sequence about the remaining, more practical part of the book). If you’re interested, begin here.
Nonuniform Learning
Recall the definition of PAC learnability:
A hypothesis class $\mathcal{H}$ is called PAC learnable iff there exists a learner $A$ and a function $m_{\mathcal{H}} : (0,1)^2 \to \mathbb{N}$ such that, for any probability distribution $\mathcal{D}$ over $X \times Y$, any gap $\epsilon \in (0,1)$, and any failure probability $\delta \in (0,1)$, if the training sequence $S$ consists of at least $m_{\mathcal{H}}(\epsilon, \delta)$ i.i.d. examples sampled via $\mathcal{D}$, then with probability at least $1 - \delta$ over the choice of $S$, it holds that $L_{\mathcal{D}}(A(S)) \le L_{\mathcal{D}}(h) + \epsilon$ for all $h \in \mathcal{H}$.
One way to weaken this definition so that it applies to a wider range of hypothesis classes is to allow the function which upper-bounds the sample complexity to also depend on the hypothesis we are competing with – so rather than upper-bounding the difference of $L_{\mathcal{D}}(A(S))$ and $\min_{h \in \mathcal{H}} L_{\mathcal{D}}(h)$, we upper-bound the difference of $L_{\mathcal{D}}(A(S))$ and $L_{\mathcal{D}}(h)$ separately for each $h \in \mathcal{H}$. We have the following formal definition:
A hypothesis class $\mathcal{H}$ is called nonuniformly learnable iff there exists a learner $A$ and a function $m_{\mathcal{H}}^{NUL} : (0,1)^2 \times \mathcal{H} \to \mathbb{N}$ such that, for any probability distribution $\mathcal{D}$ over $X \times Y$, any gap $\epsilon \in (0,1)$, any failure probability $\delta \in (0,1)$, and any hypothesis $h \in \mathcal{H}$, if the training sequence $S$ consists of at least $m_{\mathcal{H}}^{NUL}(\epsilon, \delta, h)$ i.i.d. examples sampled via $\mathcal{D}$, then with probability at least $1 - \delta$ over the choice of $S$, it holds that $L_{\mathcal{D}}(A(S)) \le L_{\mathcal{D}}(h) + \epsilon$.
(There is also a second relaxation, where the sample complexity is allowed to depend on the probability distribution $\mathcal{D}$ in addition to the hypothesis and the gap and failure probability. It is called consistency, but it turns out to be too weak to be useful. One can prove that every countable hypothesis class is consistent, and that $\mathrm{Memorize}$ is a successful “learner.”)
For an example of a hypothesis class that is nonuniformly learnable but not PAC learnable, consider the domain $X = \mathbb{R}$, the usual binary label class $Y = \{0, 1\}$, and the hypothesis class $\mathcal{H}$ of all polynomial classifiers – that is, all classifiers $h : \mathbb{R} \to \{0, 1\}$ such that there exists a polynomial function $p$ with $h(x) = 1 \iff p(x) \ge 0$ for all $x \in \mathbb{R}$. Arguing with the VC-dimension, one can see that the class of polynomial classifiers of degree at most $d$ is PAC learnable (even though it’s infinite), and that the entire class of polynomial classifiers is not PAC learnable. While this class is nonuniformly learnable (this will follow from what we prove in this chapter), $\mathrm{ERM}_{\mathcal{H}}$ is no longer a successful learner. This fact is easy to verify: given any finite number of points, $\mathrm{ERM}_{\mathcal{H}}$ would always choose a polynomial of sufficiently high degree in order to classify all of them correctly. As long as no two points are exactly the same, this is always doable – for example, one can choose the polynomial $p(x) = -\prod_{i : y_i = 1}(x - x_i)^2$, which will be 0 at every positively labeled $x_i$ and negative everywhere else. You might notice that this is precisely the argument that proves that the class has infinite VC-dimension. It is also a classic example of overfitting – it is not at all going to lead to a small estimation error. In fact, this implementation would mean that $\mathrm{ERM}_{\mathcal{H}}$ behaves exactly like $\mathrm{Memorize}$. A more sophisticated learning algorithm is needed.
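To make the overfitting argument concrete, here is a minimal Python sketch (my own illustration, not from the book): the product construction above fits any finite labeled sample with distinct points perfectly, which is exactly the Memorize-like behavior just described.

```python
# Sketch: a polynomial classifier can fit ANY finite labeled sample with
# distinct x-values perfectly.  p(x) = -prod_{i : y_i = 1} (x - x_i)^2 is zero
# exactly at the positive points and strictly negative everywhere else, so
# "p(x) >= 0" reproduces the training labels.

def erm_polynomial_classifier(sample):
    """sample: list of (x, y) pairs with distinct x-values and y in {0, 1}."""
    positives = [x for x, y in sample if y == 1]

    def p(x):
        prod = 1.0
        for xi in positives:
            prod *= (x - xi) ** 2
        return -prod

    return lambda x: 1 if p(x) >= 0 else 0

S = [(0.3, 1), (1.1, 0), (2.4, 1), (3.0, 0), (4.7, 1)]
h = erm_polynomial_classifier(S)
assert all(h(x) == y for x, y in S)  # zero empirical error on any such sample
```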
Suppose we have such a nonuniform learner $A$. This will mean that, if we fix some distribution $\mathcal{D}$ generating the points, then for any one hypothesis $h \in \mathcal{H}$, we can bound the number of sample points we need such that the predictor $A(S)$ will compete well with the true error of $h$.
The learner $A$ will have to be biased toward lower-degree polynomials so as not to behave like $\mathrm{Memorize}$. Therefore, if $h$ is a predictor corresponding to a polynomial with small degree, we probably won’t need many samples to compete with it. On the other hand, if $h$ comes from a polynomial of degree 50 and the distribution is such that $h$ is an excellent predictor, we will need a very large number of sample points before $A$ is convinced enough to seriously consider high-degree polynomials and outputs something almost as good as $h$. As the degree of the polynomials we’re competing with goes up, the number of sample points we need will go up, and therefore there is no uniform function (namely, uniform across all other hypotheses) that can bound the sample complexity.
If $\mathcal{D}$ is such that the optimal classifier corresponds to a polynomial of small degree, this will not be an issue – then, while we don’t get the theoretical assurance of being able to compete with classifiers from high-degree polynomials, they will, in fact, have large real error (although excellent empirical error), and we will compete with every hypothesis in our class fairly quickly. In that case, the theoretical bound will be pessimistic. However, recall that nonuniform learning is about guarantees that hold for all distributions simultaneously.
I’ve previously stated that there are more sophisticated approaches to the evidence-vs-priors problem than committing to a hypothesis class ahead of time. The learner $\mathrm{SRM}$ that we construct below will be an example.
Now recall the definition of uniform convergence:
A hypothesis class $\mathcal{H}$ has the uniform convergence property iff there is a function $m_{\mathcal{H}}^{UC} : (0,1)^2 \to \mathbb{N}$ such that, for every joint probability distribution $\mathcal{D}$ over $X \times Y$, every gap $\epsilon \in (0,1)$ and every confidence level $\delta \in (0,1)$, if $|S| \ge m_{\mathcal{H}}^{UC}(\epsilon, \delta)$, then with probability at least $1 - \delta$ over the choice of $S$, the inequality $|L_S(h) - L_{\mathcal{D}}(h)| \le \epsilon$ holds for all $h \in \mathcal{H}$.
Notice that we called the three bounding functions $m_{\mathcal{H}}$ in the case of PAC learnability (for sample complexity), $m_{\mathcal{H}}^{UC}$ (for uniform convergence) and $m_{\mathcal{H}}^{NUL}$ for nonuniform learning. Notice also that, unlike the other two, uniform convergence bounds the difference between real and empirical error in both directions (this will become important in a bit).
We’ve found two properties which are equivalent to PAC learnability: the uniform convergence property and having a finite VC-dimension. We now introduce a property which is equivalent to nonuniform learnability, namely
A hypothesis class $\mathcal{H}$ is nonuniformly learnable iff there exists a family $(\mathcal{H}_n)_{n \in \mathbb{N}}$ of hypothesis classes, each of which is PAC learnable, such that $\mathcal{H} = \bigcup_{n \in \mathbb{N}} \mathcal{H}_n$.
There are two directions to prove in order to establish this result; we will prove both of them. The first one is pretty easy.
“$\Rightarrow$”: Suppose $\mathcal{H}$ is nonuniformly learnable via a learner $A$. Let $m_{\mathcal{H}}^{NUL}$ be the function from the definition. For each $n \in \mathbb{N}$, set $\mathcal{H}_n := \{h \in \mathcal{H} : m_{\mathcal{H}}^{NUL}(\frac{1}{8}, \frac{1}{7}, h) \le n\}$. Then $\mathcal{H} = \bigcup_{n \in \mathbb{N}} \mathcal{H}_n$, since for all $h \in \mathcal{H}$, we have $h \in \mathcal{H}_n$ for $n = m_{\mathcal{H}}^{NUL}(\frac{1}{8}, \frac{1}{7}, h)$. Moreover, given any $n$, if $\mathcal{H}_n$ had infinite VC-dimension, we would find a probability distribution $\mathcal{D}$ over $X \times Y$ under which some $h \in \mathcal{H}_n$ has zero real error, yet the expected value of $L_{\mathcal{D}}(A(S))$ is at least $\frac{1}{4}$ even for training sequences of length $n$. (Apply the No-Free-Lunch argument.) In particular, there would be at least a $\frac{1}{7}$ chance that $L_{\mathcal{D}}(A(S)) > \frac{1}{8}$, contradicting the fact that $L_{\mathcal{D}}(A(S)) \le L_{\mathcal{D}}(h) + \frac{1}{8}$ holds with probability at least $\frac{6}{7}$ for all $h \in \mathcal{H}_n$. Therefore, $\mathcal{H}_n$ has finite VC-dimension and is thus PAC learnable by the fundamental theorem of statistical learning.
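To spell out the No-Free-Lunch step (this is the theorem from earlier in the series, restated in the current notation; since $\mathcal{H}_n$ would have infinite VC-dimension, it shatters a set of size $2m$ for every $m$, and the zero-error labeling can then be chosen from $\mathcal{H}_n$ itself): for every learner $A$ and every sample size $m$,

$$\exists\, \mathcal{D},\ h^* \in \mathcal{H}_n:\qquad L_{\mathcal{D}}(h^*) = 0 \quad\text{and}\quad \Pr_{S \sim \mathcal{D}^m}\!\big[\, L_{\mathcal{D}}(A(S)) \ge \tfrac{1}{8} \,\big] \;\ge\; \tfrac{1}{7}.$$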
The interesting thing about this proof is that getting both the gap and the failure probability to the fixed constants $\frac{1}{8}$ and $\frac{1}{7}$ indirectly implies that they can also go arbitrarily low.
The reverse direction is a good deal harder. We shall construct a new learner called $\mathrm{SRM}$ that will be our $A$. The abbreviation SRM stands for Structural Risk Minimization, and the idea is that we take into account both how well a hypothesis performs on the training data (like $\mathrm{ERM}$ does) and how easily we can bound the estimation error of its hypothesis class.
Structural Risk Minimization
To begin, we suppose that we are given a family $(\mathcal{H}_n)_{n \in \mathbb{N}}$ of hypothesis classes, each of which is PAC learnable. Since each $\mathcal{H}_n$ is PAC learnable, each also has the uniform convergence property, and we can find a family of functions $(m_n^{UC})_{n \in \mathbb{N}}$ such that $m_n^{UC}$ bounds the sample complexity for the class $\mathcal{H}_n$. Among our infinitely many classes, we will need to prioritize some over others. To do this, we define a weighting function $w : \mathbb{N} \to [0, 1]$; i.e. a function with the property that $\sum_{n=1}^{\infty} w(n) \le 1$. The obvious choice here is $w(n) = 2^{-n}$, although many other choices are also possible; it will not matter for the theoretical construction.
To achieve a global guarantee, we shall query each class for some “local” guarantee in the form of some $\epsilon_n$ and $\delta_n$. The gap will not be the same for each class, so we define a function $\epsilon_n$ that, given a certainty $\delta$ and a sample size $m$, will return the best (i.e. lowest) gap between real and empirical error that $\mathcal{H}_n$ can guarantee for $\delta$ and $m$. In symbols, we set $\epsilon_n(m, \delta) := \min\{\epsilon \in (0, 1) : m_n^{UC}(\epsilon, \delta) \le m\}$.
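To make this concrete, here is a small Python sketch (my own example, not from the book) for the special case where $\mathcal{H}_n$ is finite and the loss is bounded in $[0, 1]$: Hoeffding’s inequality plus a union bound give $m_n^{UC}(\epsilon, \delta) = \lceil \ln(2|\mathcal{H}_n|/\delta) / (2\epsilon^2) \rceil$, and inverting this in $\epsilon$ yields $\epsilon_n$ in closed form.

```python
import math

# Finite-class special case: invert the Hoeffding/union-bound sample complexity
# m_n^UC(eps, delta) = ceil(ln(2|H_n|/delta) / (2 eps^2)) to get the best gap
# that m samples can guarantee with failure probability delta.

def epsilon_n(m, delta, class_size):
    """Smallest eps such that m_n^UC(eps, delta) <= m, for a finite class."""
    return math.sqrt(math.log(2 * class_size / delta) / (2 * m))

print(epsilon_n(m=10_000, delta=0.05, class_size=1_000))  # roughly 0.023
```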
We want $\mathrm{SRM}$ to satisfy the definition of nonuniform learnability, so we assume that we are given two values $\epsilon, \delta \in (0, 1)$ as well as a training sequence $S$. Now in order to implement $\mathrm{SRM}$, we could, for each $n \in \mathbb{N}$, compute the gap which $\mathcal{H}_n$ guarantees with failure probability $\delta$, namely $\epsilon_n(|S|, \delta)$, as well as the minimal empirical error achievable with that class, namely $\min_{h \in \mathcal{H}_n} L_S(h)$. Then, we could output the hypothesis that minimizes the sum of these two values (across all of $\mathbb{N}$). However, this approach will not allow a proof that we satisfy the definition of nonuniform learnability because we will not be able to upper-bound our risk of failure by $\delta$. Rather, we can upper-bound it by $\delta$ for each class, but then only one of these infinitely many guarantees has to fail for our uniform upper bound to fail. (Despite its name, the $\delta$ in the definition of nonuniform learning is very much uniform, i.e., it applies to the entire class $\mathcal{H}$.)
To remedy this, we will demand successively stronger safety guarantees, and we use this as our mechanism to privilege earlier classes. Namely, we do it as explained above, except that we compute the gap which $\mathcal{H}_n$ can guarantee with safety $w(n) \cdot \delta$ rather than just $\delta$, i.e., the value $\epsilon_n(|S|, w(n) \cdot \delta)$. Clearly $w(n) \cdot \delta$ is a lot smaller than $\delta$, so we get more confident bounds (and particularly confident bounds for more distant classes). Then we still compute the empirical risk, add both numbers, and return the global minimum. To write it down as a formula, $\mathrm{SRM}(S)$ will output a hypothesis in the set $\operatorname{argmin}_{h \in \mathcal{H}} \left[ L_S(h) + \epsilon_{n(h)}\big(|S|, w(n(h)) \cdot \delta\big) \right]$, where $n(h) := \min\{n \in \mathbb{N} : h \in \mathcal{H}_n\}$.
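Here is a schematic Python sketch of that rule (again my own illustration, reusing the hypothetical `epsilon_n` from the previous sketch and restricted to finitely many finite classes so that the minimum can be found by brute force):

```python
import math

# Schematic SRM: pick the hypothesis minimizing empirical error plus the
# confidence gap of its class, where class n only gets failure budget w(n)*delta.

def epsilon_n(m, delta, class_size):  # finite-class gap, as in the previous sketch
    return math.sqrt(math.log(2 * class_size / delta) / (2 * m))

def empirical_error(h, S):
    return sum(1 for x, y in S if h(x) != y) / len(S)

def srm(classes, w, S, delta):
    """classes: list of finite hypothesis classes (lists of callables), ordered so
    that earlier classes get larger weights; then the overall minimum automatically
    scores each hypothesis with its first class, i.e. with n(h)."""
    best_h, best_score = None, float("inf")
    for n, H_n in enumerate(classes, start=1):
        gap = epsilon_n(len(S), w(n) * delta, len(H_n))
        for h in H_n:
            score = empirical_error(h, S) + gap
            if score < best_score:
                best_h, best_score = h, score
    return best_h

# Usage example: threshold classifiers at increasingly fine resolutions.
classes = [[(lambda t: lambda x: int(x >= t))(t) for t in range(0, 10, step)]
           for step in (5, 2, 1)]
S = [(0.4, 0), (3.2, 0), (6.1, 1), (7.9, 1)]
h_hat = srm(classes, w=lambda n: 2.0 ** -n, S=S, delta=0.05)
print([h_hat(x) for x, _ in S])  # [0, 0, 1, 1]
```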
With these progressively more confident bounds, the probability that they all hold simultaneously is at least $1 - \sum_{n=1}^{\infty} w(n) \cdot \delta \ge 1 - \delta$ by the union bound.
With the definition in place, we can prove “$\Leftarrow$” by showing that $\mathrm{SRM}$ as constructed above is a nonuniform learner of the class $\mathcal{H} = \bigcup_{n \in \mathbb{N}} \mathcal{H}_n$. For given $\epsilon$, $\delta$ and $h$, if we set $n$ to be the index of the (first) class that contains the predictor $h$, i.e., $n := \min\{k \in \mathbb{N} : h \in \mathcal{H}_k\}$, then we can define $m_{\mathcal{H}}^{NUL}(\epsilon, \delta, h) := m_n^{UC}\big(\frac{\epsilon}{2}, w(n) \cdot \delta\big)$, which will guarantee a gap between $L_{\mathcal{D}}(\mathrm{SRM}(S))$ and $L_{\mathcal{D}}(h)$ of at most $\epsilon$ by the following hand-written proof:
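In a nutshell (this is my reconstruction of the chain of inequalities from the steps discussed below, writing $m$ for $|S|$ and $\hat{h}$ for $\mathrm{SRM}(S)$):

$$\begin{align*} L_{\mathcal{D}}(\hat{h}) \;&\le\; L_S(\hat{h}) + \epsilon_{n(\hat{h})}\big(m,\, w(n(\hat{h}))\cdot\delta\big) \\ &\le\; L_S(h) + \epsilon_{n}\big(m,\, w(n)\cdot\delta\big) \\ &\le\; L_S(h) + \tfrac{\epsilon}{2} \\ &\le\; L_{\mathcal{D}}(h) + \tfrac{\epsilon}{2} + \tfrac{\epsilon}{2} \;=\; L_{\mathcal{D}}(h) + \epsilon. \end{align*}$$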
The first inequality holds because it simply writes out the gap which uniform convergence guarantees for the class of $\mathrm{SRM}(S)$. The second inequality holds because this is how we defined $\mathrm{SRM}$ – it picks a hypothesis minimizing exactly this sum. The third inequality holds because $\epsilon_n(m, \delta') \le \epsilon'$ whenever $m \ge m_n^{UC}(\epsilon', \delta')$ and because $|S| \ge m_{\mathcal{H}}^{NUL}(\epsilon, \delta, h)$ (make sure you understand why). In our case, we have that $m_{\mathcal{H}}^{NUL}(\epsilon, \delta, h) = m_n^{UC}\big(\frac{\epsilon}{2}, w(n) \cdot \delta\big)$, so the gap term is at most $\frac{\epsilon}{2}$. And the fourth inequality – recall that uniform convergence bounds the difference between empirical and real error in both directions, i.e. it also holds that $L_S(h) \le L_{\mathcal{D}}(h) + \epsilon_n\big(|S|, w(n) \cdot \delta\big)$. Now the second $\frac{\epsilon}{2}$ summand comes out for the same reason as above.
Note that, in the final (fourth) inequality, we bound the empirical error via the real error plus the maximal gap. This is necessary for the proof, but unlikely to be tight for a real instance – the hypothesis $h$ we are competing with sits in a fairly “far out” class, so it is presumably fairly complex, and likely to overfit the data, i.e., have low empirical error but high real error. Whenever the best predictor is in an early class, all of the later classes are likely to perform poorly in the real world (although they will often perform better on the training data, at least if the classes are actually ordered by increasing complexity). At least this is so in the case of polynomial predictors.
Minimum Description Length
Suppose we are interested in learning a countable hypothesis class $\mathcal{H} = \{h_1, h_2, h_3, \ldots\}$. Without any other requirements, $\mathcal{H}$ is nonuniformly learnable, being the countable union of singleton classes, i.e. $\mathcal{H} = \bigcup_{n \in \mathbb{N}} \{h_n\}$. The idea now is that we can use a “natural” weighting function which is induced by the minimum description length of each predictor in the class. Formally, we define a description language as a subset $D \subseteq \{0, 1\}^*$. If the description language is prefix-free, i.e. if there are no two descriptions $d \ne d'$ such that $d$ is a prefix of $d'$, we can define a weighting function by $w(h) := 2^{-|d(h)|}$, where $d(h)$ is the description of $h$. To see that this defines a proper weighting function – i.e. that the total sum of weights equals at most 1 – note that one can view the set of all descriptions as a probability space and assign each description a probability. Specifically, consider the following experiment:
Repeatedly flip a coin; whenever it lands heads, write down a 1, whenever it lands tails, write down a 0. Stop when the output string equals some element of $D$.
Since this language is prefix-free, each element $d \in D$ is assigned a probability in this way – and as we know, probabilities sum up to at most 1. Then since the probability of stopping at a given description $d$ equals $2^{-|d|}$, i.e. the weighting function just assigns these probabilities as weights, this shows that it is a proper weighting function.
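As a quick sanity check, here is a tiny Python sketch (my own, with a made-up description language) verifying that a prefix-free set of descriptions yields weights summing to at most 1 – this is Kraft’s inequality:

```python
# For a prefix-free description language, the weights w(h) = 2^{-|d(h)|} sum to
# at most 1 (Kraft's inequality), so they form a valid weighting function.

descriptions = ["0", "10", "110", "1110"]  # no string is a prefix of another

def is_prefix_free(codes):
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

assert is_prefix_free(descriptions)
print(sum(2 ** -len(d) for d in descriptions))  # 0.9375 <= 1
```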
This leads to a natural definition of an alternative learner $\mathrm{MDL}$, which simply follows the SRM rule while using $w$ as the weighting function. This construction is closely related to Occam’s Razor and Kolmogorov Complexity.
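To see what this buys us concretely: plugging the singleton classes and the weights $w(h) = 2^{-|d(h)|}$ into the SRM analysis, with Hoeffding’s inequality for each singleton class (my own back-of-the-envelope version, so the exact constants may differ from the book’s), one gets that with probability at least $1 - \delta$ over the choice of $S$, every $h \in \mathcal{H}$ simultaneously satisfies

$$L_{\mathcal{D}}(h) \;\le\; L_S(h) + \sqrt{\frac{|d(h)|\ln 2 + \ln(2/\delta)}{2\,|S|}},$$

so the learner trades off training error against description length – a quantitative version of Occam’s Razor.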
Computational Complexity of Learning
A field with fairly strong similarity to Machine Learning is statistics, because both are, in some sense, about learning from data. But in addition to theoretical limits, Machine Learning also has to concern itself with the annoying reality of bounded runtime. Thus, even though we’ve previously pretended that the behavior of learners like $\mathrm{ERM}_{\mathcal{H}}$ and $\mathrm{SRM}_{\mathcal{H}}$ is fully determined by their subscript, this is not really so: the subscript is merely a mission statement, and how this is actually achieved has been left unsaid. Different approaches may have drastically different runtimes in practice.
Instead of reinventing the wheel, we will borrow the relevant ideas from the existing literature on computational complexity theory.
Computational Complexity Theory: the Landau notation
One generally cannot make precise statements about the runtime of an algorithm, because it depends on the processor speed of the machine it runs on (a problem which persists even if the algorithm is fully specified). Instead, one uses “asymptotic” bounds. Let’s demonstrate how this works with an example.
Consider the selection sort algorithm. It takes as input a sequence of points $(x_1, \ldots, x_n)$ such that $x_i \in X$ for all $i$, where there is a total order on $X$ (so given any two elements, we can say which one is larger). It goes through the list, remembers the smallest element, puts it at the beginning, goes through the list again (this time starting with the second element), remembers the smallest element, and puts it at the second spot, then goes through the list again (this time starting with the third element), puts the smallest element of those in third place, and so on. It’s the slowest not-intentionally-bad sorting algorithm that I know of, but it’s also the simplest, and it’s probably a decent description of how a human would sort a list.
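For concreteness, here is a straightforward Python version of the procedure just described (my own code, purely illustrative):

```python
def selection_sort(xs):
    """Sort xs in place: repeatedly find the smallest remaining element and
    swap it into the next position."""
    n = len(xs)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):   # scan the not-yet-sorted tail
            if xs[j] < xs[smallest]:
                smallest = j
        xs[i], xs[smallest] = xs[smallest], xs[i]
    return xs

print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```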
So how long does selection sort need to sort a list of $n$ elements? Well, we sort of need $n$ operations to find the smallest element, then $n - 1$ for the second smallest and so on, leading to $n + (n-1) + \dots + 1 = \frac{n(n+1)}{2}$. And then some operations to swap elements, but it’s not obvious how many. And maybe there’s some overhead, too.
To have something rigorous, we would like to say that it’s around $n^2$ (or actually, not significantly larger than $n^2$) in a principled way. The approach taken here is the $\mathcal{O}$ notation or Landau notation. Let $f : \mathbb{N} \to \mathbb{R}_{\ge 0}$ be the function describing the runtime of selection sort as a function of the input size of the list on some arbitrary machine, and let $g : \mathbb{N} \to \mathbb{R}_{\ge 0}$ be given by $g(n) = n^2$, where $f$ outputs time in seconds. Then we want to write that $f \in \mathcal{O}(g)$ to say exactly that ”$f$ is not significantly larger than $g$”, even though we don’t know exactly how large $f$ gets (because we don’t know for which machine it measures the runtime). The formal definition goes like this:

$$f \in \mathcal{O}(g) \quad :\iff \quad \exists\, c > 0 \; \exists\, n_0 \in \mathbb{N} \; \forall\, n \ge n_0: \; f(n) \le c \cdot g(n).$$
So the idea is that, regardless of how slow the machine is that $f$ is measuring runtime for, there should be some $c$ such that every operation on this machine takes at most $c$ seconds. Obviously, for any modern machine, many thousands of operations might be performed in one second, so $c = 1$ is quite likely to do the job. But even if we had declared that $f$ measures nano-seconds instead, there would still always be such a $c$; one could just take the current one and multiply it by $10^9$. Thus, if such a $c$ exists, then $f$ should be bounded by $c$ times $g$. Furthermore, there might be some constant overhead, so we don’t demand that this is true for all input values, only all input values from some point $n_0$ onward.
One often sees $\mathcal{O}(g)$ as shorthand for [a function $f$ where $f \in \mathcal{O}(g)$]. One also often sees $f = \mathcal{O}(g)$ rather than $f \in \mathcal{O}(g)$, even though that notation is nonsensical (a function equals a set of functions?) and not even shorter. Far worse still are some of the other variants in circulation, which offend me on a remarkably deep level.
To apply these ideas to machine learning, we will need to make three adjustments. 1) we need functions that take real numbers as input, rather than natural numbers. 2) we will need to compare their output as their parameter goes to 0, rather than as their parameter grows. And 3), we will need functions of several variables.
The first two are straightforward. For $f, g : (0, 1) \to \mathbb{R}_{\ge 0}$, one can simply alter the definition into

$$f \in \mathcal{O}(g) \quad :\iff \quad \exists\, c > 0 \; \exists\, \epsilon_0 > 0 \; \forall\, \epsilon \le \epsilon_0: \; f(\epsilon) \le c \cdot g(\epsilon).$$
The third one appears to raise at least one question – if $\epsilon$ turns into a vector $(\epsilon_1, \ldots, \epsilon_k)$, then what does the condition ”$\epsilon \le \epsilon_0$” turn into? The answer is ”$\epsilon_i \le \epsilon_0$ for some $i$”, so it will be enough if one of the coordinates gets small.
Last words on Landau in general: since $f \in \mathcal{O}(g)$ for $f(n) = n$ and $g(n) = n^2$, this notation is fairly imprecise. To have something that bounds runtime in both directions, one can use the symbol $\Theta$, which is defined as $f \in \Theta(g) :\iff f \in \mathcal{O}(g)$ and $g \in \mathcal{O}(f)$.
Applying Landau to Machine Learning tasks
In the context of Machine Learning, one has the problem that there is no obvious input size for a learning task. The size of the training set is a non-starter because having more training data makes a problem easier, not harder. What one does instead is to use the gap $\epsilon$ and confidence $\delta$ as parameters. We have the following formal definition:
Given a function $f : (0, 1)^2 \to \mathbb{R}_{\ge 0}$, a learning task defined via a domain $X$, a label set $Y$, a hypothesis class $\mathcal{H}$, and a loss function $\ell$, and a learner $A$, we say that $A$ solves the task in time $\mathcal{O}(f)$ iff there exists a constant $c > 0$ such that, for all $\epsilon, \delta \in (0, 1)$, if $A$ is given a randomly sampled training sequence $S$ of sufficient length, then
1. $A$ always puts out a predictor in at most $c \cdot f(\epsilon, \delta)$ seconds
2. Whenever the predictor $A(S)$ is given any domain point $x \in X$, it puts out a label in at most $c \cdot f(\epsilon, \delta)$ seconds
3. The inequality $L_{\mathcal{D}}(A(S)) \le \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \epsilon$ holds with probability at least $1 - \delta$ over the choice of $S$
The first condition is the logical place we’ve been working toward: it says that $A$ can’t take too long to output a predictor. The second condition makes sure that $A$ doesn’t cheat – if it wasn’t for this rule, then $A$ could offload all computational work to the predictor. To be precise, we could define a learner $A'$ as “memorize the training sequence, then output the predictor that, upon receiving a domain point $x$, runs the code of $A$, applies the resulting predictor to $x$, and returns its output”. Clearly, $A'$ has excellent runtime, but it’s probably not a very useful way to go about defining a predictor. Alas, with the predictor itself being bounded by the same term as the learner, this strategy no longer helps. (Even if it could somehow offload half of all work to the predictor, taking half as long doesn’t change the complexity class.)
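A minimal Python sketch of that cheating strategy (the names here are mine; `expensive_learner` stands for whatever learner we actually care about):

```python
# The "cheating" learner: it returns a predictor immediately, but the predictor
# re-runs the expensive learning computation on every single query. Condition 2
# rules this out, because the predictor is held to the same time budget.

def lazy_learner(S, expensive_learner):
    def predictor(x):
        h = expensive_learner(S)   # all the real work happens at prediction time
        return h(x)
    return predictor

# Usage: lazy = lazy_learner(training_data, expensive_learner=some_real_learner)
# Learning is instantaneous, but every call lazy(x) pays the full learning cost.
```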
Finally, the third condition is just the usual PAC condition.
There are some cases in which $\mathrm{ERM}_{\mathcal{H}}$ can be implemented with a reasonable runtime complexity. But usually it can’t, and that’s why this is only the end of Part I and not of the book. There are many different hypothesis classes that are of interest, and the ability to implement $\mathrm{ERM}$ efficiently on them is generally desirable.
Q: Can minimum description length be over $S$ in addition to over the set of hypothesis classes?

A: Well, I don’t remember anything in detail, but I don’t believe so; I don’t think you’d want to have a restriction on the training data.