I agree with most of what you say here, but I think you’re over-emphasizing the idea that search deals with unknowns whereas control deals with knowns. Optimization via search works best when you have a good model of the situation. The extreme case for the usefulness of search is a game like Chess, where the rules are perfectly known, there’s no randomness, and no hidden information. If you don’t know a lot about a situation, you can’t build an optimal controller, but you also can’t set up a very good representation of the problem to solve via search.
This is backwards, actually. “Control” isn’t the crummy option you have to resort to when you can’t afford to search. Searching is what you have to resort to when you can’t do control theory.
Why not both? Most of your post is describing situations where you can’t easily solve a control problem with a direct rule, so you spin up a search based on a model of the situation. My paragraph which you quoted was describing a situation where dumb search becomes harder and harder, so you spin up a controller (inside the search process) to help out. Both of these things happen.
I agree with most of what you say here, but I think you’re over-emphasizing the idea that search deals with unknowns whereas control deals with knowns.
There’s uncertainty in both approaches, but it is dealt with differently. In controls, you’ll often use Kalman filters to estimate the relevant states. You might not know your exact state because there is noise on all your sensors, and you may have uncertainty in your estimate of the amount of noise, but given your best estimates of the variance, you can calculate the one best estimate of your actual state.
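To make that concrete, here’s a minimal sketch of the idea in Python: a one-dimensional Kalman filter estimating a constant value from noisy readings. The plant, the noise variances, and all the names here are made up for illustration; a real filter would use your actual system model and measured variances.

```python
import numpy as np

# Minimal 1-D Kalman filter: estimate a constant scalar state from noisy
# measurements. q and r are the process and measurement noise variances,
# i.e. the "best estimates of the variance" from the paragraph above.
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0  # state estimate and its variance
    for z in measurements:
        p = p + q            # predict: the state is modeled as constant,
                             # so only the uncertainty grows
        k = p / (p + r)      # gain: weighs prediction vs. measurement
                             # purely by their variances
        x = x + k * (z - x)  # update the estimate
        p = (1.0 - k) * p    # update the uncertainty
    return x

rng = np.random.default_rng(0)
zs = 1.0 + rng.normal(0.0, 0.5, size=200)  # noisy readings of a true 1.0
print(kalman_1d(zs))  # converges near 1.0
```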
There’s still nothing to search for in the sense of “using our model of our system, try different Kalman filter gains and see what works best”, because the math already answered that for you definitively. If you’re searching in the real world (i.e. actually trying different gains and seeing what works best), that can help, but only because you’re getting more information about what your noise distributions are actually like. You can also just measure that directly and then do the math.
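To illustrate “the math already answered that”: for a scalar system, the optimal steady-state gain falls out of iterating the Riccati recursion to a fixed point. This is a sketch with hypothetical numbers, not anyone’s actual system; a is the state transition and q, r the noise variances as above.

```python
# Iterate the scalar Riccati recursion to its fixed point and read off
# the one optimal steady-state Kalman gain. No search over gains involved.
def steady_state_gain(a=1.0, q=1e-4, r=0.25, iters=1000):
    p = 1.0  # a priori covariance; any positive start converges
    for _ in range(iters):
        k = p / (p + r)                # gain implied by the current covariance
        p = a * a * (1.0 - k) * p + q  # propagate to the next a priori covariance
    return p / (p + r)

print(steady_state_gain())  # the gain the math dictates, given q and r
```

If you then went out and measured better values for q and r, you’d just rerun the math, not search.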
With search over purely simulated outcomes, you’re saying essentially “I have uncertainty over how to do the math”, while in control theory you’re essentially saying “I don’t”.
Perhaps a useful analogy would be that of numerical integration vs. symbolic integration. You can brute-force a decent enough approximation of any integral just by drawing a bunch of little trapezoids and summing them up, and a smart high schooler can write the program to do it. Symbolic integration is much “harder”, but can often give exact solutions and isn’t so hard to compute once you know how to do it.
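For instance, the trapezoid version really is a short program (Python here; the integrand and interval are just an example, chosen because the symbolic answer is known to be exactly 2):

```python
import math

# Composite trapezoid rule: approximate an integral by summing the areas
# of n little trapezoids, exactly as described above.
def trapezoid(f, a, b, n=10_000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

print(trapezoid(math.sin, 0.0, math.pi))  # ~2.0, matching the symbolic result
```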
Why not both?
You can do both. I’m not trying to argue that doing the math and calculating the optimal answer is always the right thing to do (or even feasible/possible).
In the real world, I often do sorta “search through gains” instead of trying to get my analysis perfect or model my meta-uncertainty. Just yesterday, for example, we had some overshoot on the linear actuator we’re working on. Trying to do the math would have been extremely tedious and I likely would have messed it up anyway, but it took about two minutes to just change the values and try it until it worked well. It’s worth noting that “searching” by actually doing experiments is different from “searching” by running simulations, but the latter can make sense too—if engineer time doing control theory is expensive, laptop time running simulations is cheap, and the latter can substitute for the former to some degree.
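As a sketch of what that kind of simulation search can look like (this is a toy unit-mass plant with made-up gains and ranges, not our actual actuator):

```python
import numpy as np

# Toy "search through gains" in simulation: simulate a step response for a
# hypothetical lightly damped unit mass under PD control, score the overshoot,
# and keep whichever gains score best.
def step_overshoot(kp, kd, dt=0.01, t_end=5.0):
    x, v, target = 0.0, 0.0, 1.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (target - x) - kd * v  # PD control law
        v += (u - 0.1 * v) * dt         # unit mass, a little natural damping
        x += v * dt
        peak = max(peak, x)
    return max(0.0, peak - target)      # how far we overshoot the setpoint

# Dumb grid search over gains, keeping whichever pair overshoots least.
best = min(((kp, kd) for kp in np.linspace(1, 20, 20)
                     for kd in np.linspace(0.1, 10, 20)),
           key=lambda g: step_overshoot(*g))
print(best)
```

A real tuning objective would penalize sluggishness as well as overshoot, but the shape of the loop is the same: simulate, score, keep the best gains.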
The point I was making was that the optimal solution is still going to be what control theory says, so if it’s important to you to have the rightest answer with the fewest mistakes, you move away from searching and towards the control theory textbook—not the other way around.
Most of your post is describing situations where you can’t easily solve a control problem with a direct rule, so you spin up a search based on a model of the situation.
I don’t follow this part.