I think you are only seeing a tradeoff because you are looking at cases where both techniques are comparably good. No one builds a calculator by trying random assemblages of transistors and seeing what works; there, the gears-level insight is just much easier. When there are multiple approaches and you rule out the cases where one is obviously much better, you see a trade-off in the remaining cases. Expect there to be some cases where one technique is simply worse.
I agree with the principle here, but I think the two are competitive in practice far more often than one would naively expect. For instance, people do use black-box optimizers to design arithmetic logic units (ALUs), the core component of a calculator. Indeed, circuit optimizers are a core tool in digital hardware design these days (see e.g. espresso for a relatively simple one), and of course there's a whole academic subfield devoted to the topic.
The competitiveness of the two methods comes from hybrid approaches. If evolution can solve a problem, we can study the evolved solution to build a competitive gears-level model. If a gears-level approach can solve a problem, we can initialize an iterative optimizer with the gears-level solution and let it run, which is what circuit designers do; a sketch of this follows below.
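To make the hybrid idea concrete, here is a minimal sketch in Python. Everything in it (the gate-list encoding, the mutation operator, the scoring) is my own toy construction for illustration, not how espresso or any production circuit optimizer actually works: a simple hill climber searches over five-gate circuits for a one-bit full adder, once initialized with the textbook gears-level design and once from a random circuit.

```python
import itertools
import random

# Toy gate set for illustration only.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

N_INPUTS = 3   # full-adder inputs: a, b, carry-in
N_GATES = 5

def evaluate(circuit, inputs):
    """A circuit is a list of (gate_name, i, j) triples; each gate reads
    two earlier wires. The last two wires are the outputs (sum, carry)."""
    wires = list(inputs)
    for gate, i, j in circuit:
        wires.append(GATES[gate](wires[i], wires[j]))
    return wires[-2], wires[-1]

def score(circuit):
    """How many of the 8 input rows the circuit gets right as a full adder."""
    correct = 0
    for a, b, cin in itertools.product((0, 1), repeat=N_INPUTS):
        total = a + b + cin
        if evaluate(circuit, (a, b, cin)) == (total % 2, total // 2):
            correct += 1
    return correct

def random_gate(position):
    """A random gate at `position`, wired to any earlier wire."""
    return (random.choice(list(GATES)),
            random.randrange(N_INPUTS + position),
            random.randrange(N_INPUTS + position))

def mutate(circuit):
    """Black-box step: replace one gate at random."""
    new = list(circuit)
    k = random.randrange(len(new))
    new[k] = random_gate(k)
    return new

def hill_climb(circuit, steps=5000):
    best, best_score = circuit, score(circuit)
    for _ in range(steps):
        candidate = mutate(best)
        s = score(candidate)
        if s >= best_score:  # accept sideways moves too
            best, best_score = candidate, s
    return best_score

# Gears-level initialization: the textbook full adder.
# Wires 0, 1, 2 are a, b, cin; the gates append wires 3..7.
textbook = [
    ("XOR", 0, 1),  # w3 = a ^ b
    ("AND", 0, 1),  # w4 = a & b
    ("AND", 3, 2),  # w5 = (a ^ b) & cin
    ("XOR", 3, 2),  # w6 = sum = a ^ b ^ cin
    ("OR",  4, 5),  # w7 = carry = (a & b) | ((a ^ b) & cin)
]

random.seed(0)
print("seeded with gears-level design:", hill_climb(textbook), "/ 8")
print("seeded at random:",
      hill_climb([random_gate(k) for k in range(N_GATES)]), "/ 8")
```

The seeded run starts at a perfect score, so the optimizer only has to hold onto it (in a real flow it would then push on area, delay, or power); the randomly initialized run has to discover correctness on its own.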
I think it is true that gears-level models are systematically undervalued, and that part of the reason is their longer payoff curve.
A simple example is debugging code: a gears-level approach is to try to understand what the code is doing and why it doesn't do what you want; a black-box approach is to try changing things somewhat randomly. Most programmers I know will agree that the gears-level approach is almost always better, but admit that they at least sometimes fall back on the black-box approach when tired, frustrated, or stuck.
And companies that focus too much on short-term results (most of them, IMO) will push programmers to spend far more time on black-box debugging than is optimal.
Perhaps part of the reason the choice typically appears obvious is that gears-level methods are underestimated.
A simple example is debugging code: a gears-level approach is to try to understand what the code is doing and why it doesn't do what you want; a black-box approach is to try changing things somewhat randomly.
To drill in further, a great way to build a model of why a defect arises is to use the scientific method. You generate some hypothesis about the behavior of your program (if X is true, then Y) and then test it. If the results of your test invalidate the hypothesis, you've learned something about your code and where not to look. If your hypothesis is confirmed, you may be able to resolve your issue, or at least refine your hypothesis in the right direction.
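As a made-up but concrete illustration: suppose a `mean` function keeps returning values that look slightly too small. A gears-level hypothesis might be "if the divisor is off by one, then for n copies of 1.0 the function should return exactly n / (n + 1)", and that prediction is cheap to test directly:

```python
# Hypothetical buggy code under investigation: the divisor is off by one.
def mean(xs):
    return sum(xs) / (len(xs) + 1)  # bug: should be len(xs)

# Test the hypothesis: for n copies of 1.0 (true mean 1.0), an
# off-by-one divisor predicts a result of exactly n / (n + 1).
for n in (2, 10, 100):
    observed = mean([1.0] * n)
    predicted = n / (n + 1)
    print(n, observed, abs(observed - predicted) < 1e-9)
```

Each `True` confirms the prediction and narrows the search to the divisor; a `False` would have falsified the hypothesis and redirected attention elsewhere, which is still progress.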