I would agree that a search process in which the cost of evaluation goes to infinity becomes purely a control process: you can’t perform any filtering of possibilities based on evaluation, so you have to output one possibility and try to make it a good one (with no guarantees).
This is backwards, actually. “Control” isn’t the crummy option you have to resort to when you can’t afford to search. Searching is what you have to resort to when you can’t do control theory.
When your Jacuzzi is at 60°F and you want it at 102°F, there are a lot of possible heating profiles you could try out. However, you know that no combination of “on off off on on off off on” is going to surprise you by giving a better result than simply leaving the heater on when it’s too cold and off when it’s too hot. Control theory can actually guarantee the optimal result, and under some simple assumptions that result is exactly what it seems like it’d be. Guided missiles do get more complicated than this, with all the inertias and significant measurement noise and the moving target and all that, but the principle remains the same: compute the best estimate of where you stand relative to the trajectory you want to be on (where “trajectory” includes things like the angular rates of your control surfaces), and then steer your trajectory toward that. There’s just nothing left to search for when you already know the best thing to do.
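To make the “nothing left to search for” point concrete, here’s a minimal sketch of that bang-bang rule; the setpoint, deadband, and heating/cooling rates are made-up numbers for illustration:

```python
# Bang-bang (thermostat) control: no search over on/off sequences needed.
# All numbers here are made up for illustration.

SETPOINT_F = 102.0  # desired Jacuzzi temperature
DEADBAND_F = 0.5    # hysteresis band to avoid rapid on/off chatter

def heater_command(temp_f, heater_on):
    """Return True (heater on) or False (heater off) for the current temperature."""
    if temp_f < SETPOINT_F - DEADBAND_F:
        return True
    if temp_f > SETPOINT_F + DEADBAND_F:
        return False
    return heater_on  # inside the deadband, keep doing whatever we were doing

# Simulate from 60°F: the controller just holds the heater on until setpoint.
temp, on = 60.0, False
for minute in range(240):
    on = heater_command(temp, on)
    temp += 0.3 if on else -0.05  # made-up heating and ambient-loss rates per minute
print(f"after 4 hours: {temp:.1f}°F")
```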
The reason we ever need to search is that it’s not always obvious whether our actions are bringing us toward or away from our desired trajectory. “Searching” is performing trial and error by simulating forward in time until you realize “nope, this leads to a bad outcome,” then backing up to before you “made” the mistake and trying something else. For example, if you’re trying to cook a meal, you might have to get all the way to the finished product before you realize that you started out with too much of one ingredient. However, this is a result of not knowing the composition you’re looking for and how your inputs affect it. Once you understand the objective, the process and actuators, and how things project into the future, you know your best guess of where to go at each step. If the water is too cold, you simply turn the heater on.
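And for contrast, here’s the same toy problem attacked by search: simulate a candidate on/off plan forward, and when it leads somewhere bad, back up and try something else. (The dynamics and numbers are again invented for illustration, and coarser than above so the search stays small.)

```python
# Toy "search" for a heating plan: simulate forward, back up on bad outcomes.
# Made-up dynamics: each step, heater-on adds 5°F and heater-off loses 1°F.

def simulate(plan, temp=60.0):
    for on in plan:
        temp += 5.0 if on else -1.0
    return temp

def search(plan, horizon, target=102.0, tol=2.5):
    if len(plan) == horizon:
        return plan if abs(simulate(plan) - target) <= tol else None
    for action in (True, False):                   # try heater on, then off
        result = search(plan + [action], horizon, target, tol)
        if result is not None:
            return result                          # this branch works out
    return None                                    # dead end: back up, try elsewhere

plan = search([], horizon=10)
print(plan, simulate(plan))
```

The search finds a workable plan, but it only recapitulates what the thermostat rule already knew.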
Searching, then, isn’t just something we do when projecting forward and evaluating outcomes is cheap. It’s what we do when analyzing the problem and building an understanding of how our inputs affect our trajectories (i.e. control theory) is expensive. Or difficult, or impossible.
Or perhaps better put, searching is for when we haven’t yet found what we want and how to get there. Control systems are what we implement once we know.
I agree with most of what you say here, but I think you’re over-emphasizing the idea that search deals with unknowns whereas control deals with knowns. Optimization via search works best when you have a good model of the situation. The extreme case for the usefulness of search is a game like Chess, where the rules are perfectly known, there’s no randomness, and no hidden information. If you don’t know a lot about a situation, you can’t build an optimal controller, but you also can’t set up a very good representation of the problem to solve via search.
This is backwards, actually. “Control” isn’t the crummy option you have to resort to when you can’t afford to search. Searching is what you have to resort to when you can’t do control theory.
Why not both? Most of your post is describing situations where you can’t easily solve a control problem with a direct rule, so you spin up a search based on a model of the situation. My paragraph which you quoted was describing a situation where dumb search becomes harder and harder, so you spin up a controller (inside the search process) to help out. Both of these things happen.
I agree with most of what you say here, but I think you’re over-emphasizing the idea that search deals with unknowns whereas control deals with knowns.
There’s uncertainty in both approaches, but it is dealt with differently. In controls, you’ll often use Kalman filters to estimate the relevant states. You might not know your exact state because there is noise on all your sensors, and you may have uncertainty in your estimate of the amount of noise, but given your best estimates of the variances, you can calculate the one best estimate of your actual state.
There’s still nothing to search for in the sense of “using our model of our system, try different Kalman filter gains and see what works best,” because the math has already answered that for you definitively. If you’re searching in the real world (i.e. actually trying different gains and seeing what works best), that can help, but only because you’re getting more information about what your noise distributions are actually like. You can also just measure those directly and then do the math.
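Here’s a minimal 1-D sketch of what I mean, with made-up variances: once you commit to estimates of the process noise Q and measurement noise R, the gain K falls straight out of the math at each step, and there’s no tuning loop in sight:

```python
# Scalar Kalman filter: the optimal gain is computed, not searched for.
# Q and R (noise variances) and the simulated signal are made up for illustration.
import random

Q, R = 0.01, 1.0        # process and measurement noise variances (assumed known)
x_est, P = 0.0, 1.0     # initial state estimate and its variance

x_true = 0.0
for step in range(50):
    x_true += random.gauss(0.0, Q ** 0.5)     # the true state drifts slightly
    z = x_true + random.gauss(0.0, R ** 0.5)  # noisy measurement

    P += Q                     # predict: uncertainty grows as the state drifts
    K = P / (P + R)            # optimal gain, straight from the variances
    x_est += K * (z - x_est)   # update: move toward the measurement by K
    P *= 1.0 - K               # uncertainty shrinks after the measurement

print(f"gain settles at K = {K:.3f}")  # converges to the steady-state value
```

If you went out and measured your actual noise distributions, you’d plug the measured Q and R in here and be done.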
With search over purely simulated outcomes, you’re essentially saying “I have uncertainty over how to do the math”, while in control theory you’re essentially saying “I don’t”.
Perhaps a useful analogy is numerical integration vs. symbolic integration. You can brute-force a decent enough approximation of most integrals just by drawing a bunch of little trapezoids and summing them up, and a smart high schooler can write the program to do it. Symbolic integration is much “harder”, but it can often give exact solutions and isn’t so hard to compute once you know how to do it.
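The numerical half of that analogy in code, with an integrand I picked arbitrarily:

```python
# Brute-force numerical integration: sum up a bunch of little trapezoids.
# Example integrand (x^2 on [0, 1]) chosen arbitrarily for illustration.

def trapezoid(f, a, b, n=10_000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

approx = trapezoid(lambda x: x * x, 0.0, 1.0)
print(approx, "vs exact 1/3")  # ~0.3333333, where symbolic integration gives 1/3 exactly
```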
Why not both?
You can do both. I’m not trying to argue that doing the math and calculating the optimal answer is always the right thing to do (or even always feasible).
In the real world, I often do sorta “search through gains” instead of trying to get my analysis perfect or model my meta-uncertainty. Just yesterday, for example, we had some overshoot on the linear actuator we’re working on. Trying to do the math would have been extremely tedious and I likely would have messed it up anyway, but it took about two minutes to just change the values and try it until it worked well. It’s worth noting that “searching” by actually doing experiments is different from “searching” by running simulations, but the latter can make sense too: if engineer time spent doing control theory is expensive and laptop time spent running simulations is cheap, the simulations can substitute for the analysis to some degree.
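As a sketch of what that simulation-flavored version of gain search might look like (the plant model and the grid of gains below are invented, not our actual actuator):

```python
# Toy "search through gains": simulate a made-up second-order plant under
# PD control and keep the gain pair with the least overshoot.

def overshoot(kp, kd, dt=0.01, steps=2000, target=1.0):
    pos, vel, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        force = kp * (target - pos) - kd * vel  # PD control law
        vel += force * dt                       # unit mass, no friction
        pos += vel * dt
        peak = max(peak, pos)
    return peak - target                        # how far past the setpoint we swing

candidates = [(kp, kd) for kp in (5.0, 10.0, 20.0) for kd in (1.0, 2.0, 4.0, 8.0)]
best = min(candidates, key=lambda gains: abs(overshoot(*gains)))
print("least overshoot at kp, kd =", best)
```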
The point I was making was that the optimal solution is still going to be what control theory says, so if it’s important to you to have the rightest answer with the fewest mistakes, you move away from searching and towards the control theory textbook—not the other way around.
Most of your post is describing situations where you can’t easily solve a control problem with a direct rule, so you spin up a search based on a model of the situation.
I don’t follow this part.