It says that if the L0-pseudonorm solution has error epsilon, then the L1-norm solution has error at most C*epsilon, for some constant C > 0. In the exact case epsilon is zero, so the two solutions coincide.
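A minimal sketch of the exact case: minimizing the L1 norm subject to Au = b (basis pursuit, cast as a linear program) recovers the sparse solution when there are enough Gaussian measurements. The dimensions, seed, and sparsity level below are illustrative assumptions, not from the thread.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 10, 16, 2                      # measurements, ambient dimension, sparsity
A = rng.standard_normal((m, n))          # Gaussian measurement matrix
u0 = np.zeros(n)
u0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ u0                               # exact (noise-free) observations

# Basis pursuit: min ||u||_1 s.t. A u = b, via the split u = p - q with p, q >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
u_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(u_hat - u0)))        # recovery error of the L1 solution
```

With these (comfortable) dimensions the L1 minimizer matches the sparse ground truth; near the sparsity/measurement phase transition it can fail, which is where the C*epsilon-type bounds become relevant.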
Also, Jacob originally specified that the coefficients were drawn from a Gaussian, and nobody seems to be using that fact.
You don’t really need the fact for the exact case. In the inexact case, you can use it in the form of an additional L2-norm regularization.
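A hedged sketch of that suggestion: a Gaussian prior on the coefficients shows up as an extra L2 penalty on top of the L1 term (the elastic-net objective), which can be minimized by proximal gradient descent (ISTA). All sizes, noise levels, and penalty weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 60
A = rng.standard_normal((m, n))
u0 = np.zeros(n)
u0[:3] = [2.0, -1.5, 1.0]                    # sparse ground truth (assumed)
b = A @ u0 + 0.05 * rng.standard_normal(m)   # noisy observations (inexact case)

lam1, lam2 = 0.5, 0.1                        # L1 weight, extra L2 weight
L = np.linalg.norm(A, 2) ** 2 + lam2         # Lipschitz constant of the smooth part
v = np.zeros(n)
for _ in range(2000):
    # gradient of 0.5*||A v - b||^2 + 0.5*lam2*||v||^2
    grad = A.T @ (A @ v - b) + lam2 * v
    w = v - grad / L
    # soft-thresholding = proximal step for the L1 term
    v = np.sign(w) * np.maximum(np.abs(w) - lam1 / L, 0.0)

print(np.count_nonzero(np.abs(v) > 1e-6))    # number of nonzeros in the estimate
```

The L2 term does not create sparsity by itself; it stabilizes the solution under noise and correlated columns, which is exactly the role a Gaussian coefficient prior would play here.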
Note that in the inexact case (i.e., with observation error) this model (the Lasso) fits comfortably in a Bayesian framework, with a double exponential (Laplace) prior on u. Leon already made this point below and jsteinhardt replied
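A minimal numerical check of that correspondence (all values illustrative): with b ~ N(Au, sigma^2 I) and an i.i.d. double exponential prior p(u_i) proportional to exp(-|u_i|/tau), the negative log posterior is the Lasso objective up to a positive rescaling and an additive constant, so the MAP estimate is the Lasso solution with penalty lam = sigma^2/tau.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
sigma, tau = 0.7, 0.4                     # noise scale, prior scale (assumed)

def neg_log_posterior(u):
    # -log p(u | b), dropping constants (normalizers and -log p(b))
    return np.sum((A @ u - b) ** 2) / (2 * sigma**2) + np.sum(np.abs(u)) / tau

def lasso_objective(u, lam):
    return 0.5 * np.sum((A @ u - b) ** 2) + lam * np.sum(np.abs(u))

lam = sigma**2 / tau                      # the implied L1 penalty weight
u1, u2 = rng.standard_normal(n), rng.standard_normal(n)
# Objective differences agree, so both functions share the same minimizer.
print(neg_log_posterior(u1) - neg_log_posterior(u2))
print((lasso_objective(u1, lam) - lasso_objective(u2, lam)) / sigma**2)
```

Note the usual caveat: this makes the Lasso the MAP point of that posterior, not the posterior mean, and the full Bayesian posterior under a Laplace prior is not itself sparse.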
Yes, but if I understand correctly it occurs with probability 1 for many classes of probability distributions (including this one, I think).