I don’t think this is why improper linear models work. If you have a large number of variables, most of which are irrelevant in the sense of being uncorrelated with the outcome, then the irrelevant variables will be randomly assigned +1 or −1 weights and will on average cancel out, leaving the signal from the relevant variables, which do not cancel each other out.
So even without an implicit prior from an expert relevance selection effect or any explicit prior enforcing sparsity, you would still get good performance from improper linear models. (And IIRC, when you use something like ridge regression or Laplacian priors, the typical result, especially in high-dimensional settings like genomics or biology, is that most of the variables drop out or get set to zero, so even in these ‘enriched’ datasets, most of the variables are irrelevant. What’s sauce for the goose is sauce for the gander.)
Adding in more irrelevant variables does change things quantitatively by lowering power due to increased variance and requiring more data, but I don’t see how this leads to any qualitative transition from working to not working such that it might explain why they work. That seems to have more to do with the human subjects overweighting noise and the ‘bet on sparsity’ principle.
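A toy simulation may make the quantitative-not-qualitative point concrete; this is my own sketch, not anything from the cited studies, assuming a few genuinely predictive variables with ‘expert’ +1 weights plus varying numbers of pure-noise variables with random ±1 weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k_rel = 5000, 5

X_rel = rng.standard_normal((n, k_rel))
y = X_rel.sum(axis=1) + rng.standard_normal(n)      # outcome driven only by the relevant variables

for k_irr in (0, 5, 50, 500):
    X_irr = rng.standard_normal((n, k_irr))         # irrelevant: uncorrelated with the outcome
    w_rel = np.ones(k_rel)                          # 'expert' +1 weights on the relevant variables
    w_irr = rng.choice([-1.0, 1.0], size=k_irr)     # random +/-1 weights on the irrelevant ones
    score = X_rel @ w_rel + X_irr @ w_irr
    print(f"{k_irr:3d} irrelevant variables: corr(score, y) = {np.corrcoef(score, y)[0, 1]:.3f}")
```

The random-sign noise dilutes the correlation gradually as irrelevant variables pile up; there is no threshold at which the model suddenly stops working.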
If I’m not mistaken, a similar principle is at work in explaining why Random Forests / Extremely Randomized Trees empirically work so well on machine learning tasks (and why they also seem to be fairly robust to numerous irrelevant variables). They aren’t linear models in terms of the original variables, but if each tree is treated as a new variable, then the collection of trees is a linear model of equally weighted predictors.
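That reading can be checked directly in scikit-learn (a minimal sketch of my own, not something from the thread): a fitted RandomForestRegressor’s prediction is exactly the equal-weight average of its individual trees’ predictions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=20, n_informative=5, noise=1.0, random_state=0)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The ensemble prediction is a linear combination of the trees with all weights equal to 1/100:
per_tree = np.stack([tree.predict(X) for tree in forest.estimators_])
print(np.allclose(forest.predict(X), per_tree.mean(axis=0)))    # True
```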
Maybe. The explanation I’ve seen floated is that the tree methods are exploiting nearest-neighbor effects with adaptive distances; maybe that winds up being about the same thing.
then the irrelevant variables will be randomly assigned +1 or −1 weights and will on average cancel out, leaving the signal from the relevant variables, which do not cancel each other out.
This will seriously degrade the signal. Normally there are only a few key variables, so adding more random ones with similar weights will increase the number of spurious results, i.e. make the model worse.
Adding in more irrelevant variables does change things quantitatively by lowering power due to increased variance and requiring more data, but I don’t see how this leads to any qualitative transition from working to not working such that it might explain why they work.
I don’t think this is true. All the useful weights are set to +1 or −1 by expert assessment, and the non-useful weights are just noise. Why would more data be required?
Yes, but again, where is the qualitative difference? In what sense does this explain the performance of improper linear models versus human experts? Why does the subtle difference between a model based on an ‘enriched’ set of variables and a model based on a non-enriched but slightly worse set ‘explain’ how they perform better than humans?
? I’m not sure what you’re asking for. The basic points are: a) experts are bad at integrating information, b) experts are good at selecting important variables of roughly equal importance, and c) these variables are often highly correlated.
a) explains why experts are bad (as in worse than proper linear models), b) and c) explain why improper linear models might perform not too far off proper linear models (and hence be better than experts).
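Points b) and c) can be illustrated numerically; the setup below is hypothetical (my own choice of equicorrelated predictors with roughly equal true coefficients), but it shows unit weights coming close to the fitted OLS weights:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, rho = 10000, 6, 0.5
cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)     # c) highly correlated predictors
X = rng.multivariate_normal(np.zeros(k), cov, size=n)
true_w = rng.uniform(0.5, 1.5, size=k)                  # b) roughly equal importance
y = X @ true_w + 2.0 * rng.standard_normal(n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)        # 'proper' fitted weights
r_proper = np.corrcoef(X @ beta_ols, y)[0, 1]
r_improper = np.corrcoef(X @ np.ones(k), y)[0, 1]       # improper: all weights set to +1
print(f"proper r = {r_proper:.3f}, improper r = {r_improper:.3f}")
```

With correlated predictors of comparable importance, the unit-weight score gives up very little relative to the fitted weights.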
Nice. To make your proposed explanation more precise:
Take a random vector on the n-dim unit sphere. Project to the nearest (+1,-1)/sqrt(n) vector; what is the expected l2-distance / angle? How does it scale with n?
If this value decreases in n, then your explanation is essentially correct, or did you want to propose something else?
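A quick Monte Carlo for exactly that quantity (my own check, using the fact that the nearest (+1,−1)/sqrt(n) vector is the rescaled sign pattern of x):

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (10, 100, 1000, 10000):
    x = rng.standard_normal((2000, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # 2000 random points on the n-dim unit sphere
    p = np.sign(x) / np.sqrt(n)                     # nearest (+1,-1)/sqrt(n) vector
    cos = np.sum(x * p, axis=1)                     # cosine of the angle (both are unit vectors)
    dist = np.linalg.norm(x - p, axis=1)            # l2 distance
    print(f"n = {n:5d}: mean cosine = {cos.mean():.4f}, mean l2 distance = {dist.mean():.4f}")
```

Empirically both settle to constants rather than improving with n, which is what the calculation below arrives at analytically.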
Start by taking a random vector x where each coordinate is a unit Gaussian (normalize later). The projection px just records which coordinates are positive and which are negative.
We are interested in E[<x, px> / (|x| sqrt(n))], the expected cosine of the angle between x and its sign vector px.
If the dimension is large enough, then we won’t really need to normalize; it is enough to start with 1/sqrt(n) Gaussians, as we will almost surely get almost unit length. Then all components are independent.
For the angle, we then (approximately) need to compute E(sum_i |x_i| / n), where each x_i is unit Gaussian. This is asymptotically independent of n; so it appears like this explanation of improper linear models fails.
Darn, after reading your comment I mistakenly believed that this would be yet another case of “obvious from high-dimensional geometry” / random projection.
PS. In what sense are improper linear models working? In the l_1, l_2, or l_infty sense?
Edit: I was being stupid, leaving the above for future ridicule. We want E(sum_i |x_i| / n)=1, not E(sum_i |x_i|/n)=0.
The folded Gaussian tells us that E[sum_i |x_i|/n] = sqrt(2/pi) for large n. The explanation still does not work, since sqrt(2/pi) ≈ 0.8 < 1, and this gives us the expected error margin of improper high-dimensional models.
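For reference, the folded-Gaussian mean invoked here is the standard computation (nothing beyond the usual integral):

```latex
\mathbb{E}\lvert Z\rvert
  = \int_{-\infty}^{\infty} \lvert z\rvert\,\frac{e^{-z^{2}/2}}{\sqrt{2\pi}}\,dz
  = \frac{2}{\sqrt{2\pi}}\int_{0}^{\infty} z\,e^{-z^{2}/2}\,dz
  = \frac{2}{\sqrt{2\pi}}
  = \sqrt{\frac{2}{\pi}} \approx 0.798
  \quad\text{for } Z \sim \mathcal{N}(0,1).
```

So E(sum_i |x_i|/n) = sqrt(2/pi) exactly for unit Gaussians; the large-n part only enters through |x| being close to sqrt(n).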
@Stuart: What are the typical empirical errors? Do they happen to be near sqrt(2/pi), which is close enough to 1 to be summarized as “kinda works”?