The No Free Lunch Theorem states that "any two optimization algorithms are equivalent when their performance is averaged across all possible problems."
So if the class of target functions (= the set of possible problems you might want to solve) is very large, then it's hard for any particular model class (= set of candidate solutions) to do much better on average than any other model class, and you can't obtain strong guarantees that you should expect good approximation.
If the target function class is smaller and your model class is big enough, you might have better luck.
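To make the "averaged across all possible problems" part concrete, here is a minimal sketch in Python (the learners `learner_a` and `learner_b` are made-up illustrations, not from any particular library): on a tiny domain we enumerate every possible binary target function and check that two opposite learning rules end up with the same average error on the points they were not trained on.

```python
import itertools

# Toy domain of 4 inputs; train on the first two, test on the last two.
X = [0, 1, 2, 3]
train_idx, test_idx = [0, 1], [2, 3]

def learner_a(train_labels):
    # Predict the majority training label everywhere.
    guess = int(sum(train_labels) >= len(train_labels) / 2)
    return lambda x: guess

def learner_b(train_labels):
    # Deliberately perverse: predict the opposite of the majority label.
    guess = 1 - int(sum(train_labels) >= len(train_labels) / 2)
    return lambda x: guess

def avg_test_error(learner):
    errors = []
    # Average over every possible binary target function on the domain
    # (2**4 = 16 of them) -- the "all possible problems" in the theorem.
    for target in itertools.product([0, 1], repeat=len(X)):
        h = learner([target[i] for i in train_idx])
        errs = [h(X[i]) != target[i] for i in test_idx]
        errors.append(sum(errs) / len(test_idx))
    return sum(errors) / len(errors)

print(avg_test_error(learner_a))  # 0.5
print(avg_test_error(learner_b))  # 0.5 -- same average, per No Free Lunch
```

Both learners average 50% error because, once you average over every possible target, the labels on the unseen points are uniformly distributed no matter what the training data looked like. Restricting the target class (so the unseen labels are no longer arbitrary) is exactly what lets one learner beat another.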