In Deep Learning, Goodfellow writes in response to the no free lunch (NFL) theorem:
The philosophy of deep learning in general … is that a wide range of tasks (such as all the intellectual tasks people can do) may all be solved effectively using very general-purpose forms of regularization
Here, regularization means changes made to a learning algorithm that are intended to reduce generalization error but not training error; such changes are essentially one kind of assumption about the task to be learned.
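As a minimal sketch of that idea (not from the book, just an illustrative toy example), ridge regression adds an L2 penalty to the training objective. The penalty encodes the assumption that small weights generalize better: it typically raises training error a little while lowering generalization error.

```python
import numpy as np

# Toy data: only a few of the 10 features actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.0, 0.5]
y = X @ true_w + rng.normal(scale=0.1, size=50)

def fit_ridge(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_unregularized = fit_ridge(X, y, lam=0.0)  # no extra assumption: fit training data as closely as possible
w_regularized = fit_ridge(X, y, lam=5.0)    # regularized: trade a bit of training error for generalization
```

The same pattern (add a penalty or constraint that encodes an assumption about the task) is what weight decay, dropout, and similar techniques do in deep networks.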
Good find! Yeah, this is a good explanation for learning, and the NFL razor does not discard it. I think almost no deep learning professor believes the bad explanation that “deep learning works because NNs are universal approximators”, but it’s more common among students and non-experts (I believed it for a while!)