I think we can be confident that this optimizer will do very well on completely new optimization problems.
Even if the optimizer can perform arbitrarily better, given more time, on certain infinite sets of algorithms, that does not mean it can perform arbitrarily better on any set of algorithms given more time; such an optimizer would be impossible to construct.
That’s not to say you couldn’t build an optimizer that solves all practical problems, but that is, as jacobt puts it, a “really hard problem”.
OK, we do have to make the training set somewhat similar to the kinds of problems the optimizer will encounter in the future. But if there is enough variety in the training set, then the only way to score well should be to use very general optimization techniques. It is not meant to work on “any set of algorithms”; it is specialized for real-world practical problems, which should be good enough.
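To make the "variety forces generality" point concrete, here is a minimal sketch of what scoring an optimizer against a varied training suite might look like. Everything here is hypothetical and illustrative (the problem suite, the `random_search` baseline, the scoring rule); it is not the setup anyone in this thread actually proposed. The idea is just that averaging over dissimilar problems punishes optimizers that overfit to one problem family.

```python
import random

# Hypothetical varied "training set" of 1-D minimization problems.
# Each family has a different shape, so exploiting the structure of
# one family does not help on the others.
PROBLEMS = [
    ("quadratic", lambda x: (x - 3) ** 2),       # smooth bowl
    ("abs",       lambda x: abs(x + 1)),         # non-differentiable
    ("quartic",   lambda x: (x - 0.5) ** 4 + 2), # flat near the minimum
]

def random_search(f, iters=2000, lo=-10.0, hi=10.0, seed=0):
    """A deliberately general (if weak) optimizer: it assumes no
    structure at all, so it scores tolerably on every family."""
    rng = random.Random(seed)
    return min(f(rng.uniform(lo, hi)) for _ in range(iters))

def score(optimizer):
    """Average achieved value across the whole suite (lower is better).
    With enough variety, only a genuinely general technique can keep
    this average low; a specialist fails on the other families."""
    return sum(optimizer(f) for _, f in PROBLEMS) / len(PROBLEMS)

print(score(random_search))
```

A specialist (say, one that always returns the vertex of a parabola fit) would beat random search on the quadratic but do badly on the rest, so its suite-wide average would be worse; that gap is what a varied training set is meant to measure.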