But how is a SAT solver an optimizer? There's no implied goal, as far as I can tell.
There’s a bunch of things you want to fit in your backpack, but only so much space. Fortunately, you’re carrying these things to sell, and you work in a business where you’re guaranteed to sell everything you bring with you, and the prices are fixed. You write a program with the same goal as you—finding the combination which yields maximum profit.
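This is the 0/1 knapsack problem. A brute-force sketch of that program (item names, weights, and prices are made up for illustration):

```python
# Brute-force knapsack: try every subset of items, keep the one with the
# highest total price that still fits in the backpack.
from itertools import combinations

items = {"watch": (1, 30), "radio": (4, 50), "lamp": (3, 40)}  # name: (weight, price)
capacity = 5

def best_combination(items, capacity):
    best_profit, best_set = 0, ()
    names = list(items)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            weight = sum(items[n][0] for n in combo)
            profit = sum(items[n][1] for n in combo)
            if weight <= capacity and profit > best_profit:
                best_profit, best_set = profit, combo
    return best_profit, best_set

print(best_combination(items, capacity))  # → (80, ('watch', 'radio'))
```

Unlike a SAT solver, this program has an explicit objective (profit) that it compares candidates against, which is what makes it an optimizer in the ordinary sense.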
Thanks Pattern! I do see now how it could be used as an optimizer, but it still seems like it isn't intrinsically an optimizer (the way linear algebra isn't an optimizer but can be used as one).
I don’t think anyone actually claimed they were all intrinsically optimizers, but I was still confused by it.
The way SAT solvers work is by trying to assign values to as many variables as possible without conflict. For a finite number of constraints, the maximum is at most all of them, so the search stops. (If the people who write the solver assume there's a solution, then they probably don't include code to keep the current "best" assignment around.)
As Joar Skalse noted below, this could be considered intrinsic optimization. (Maybe if you wrote a SAT solver that consisted solely of trying random assignments for all variables at once until it happened to find one satisfying every constraint, it wouldn't be an optimizer, absent additional code to retain the best solution so far.)
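A toy sketch of that backtracking picture (the clause encoding and variable names are my own, not taken from any particular solver): the solver extends a partial assignment one variable at a time and backtracks on conflict, rather than comparing candidates against a score.

```python
# Clauses are lists of signed integers: [1, -2] means (x1 OR NOT x2).
def satisfied(clause, assignment):
    # Helper: true if some literal in the clause is assigned true.
    return any(assignment.get(abs(lit)) == (lit > 0) for lit in clause)

def conflicted(clause, assignment):
    # A clause conflicts when every one of its literals is assigned and false.
    return all(abs(lit) in assignment and assignment[abs(lit)] != (lit > 0)
               for lit in clause)

def solve(clauses, variables, assignment=None):
    assignment = {} if assignment is None else assignment
    if any(conflicted(c, assignment) for c in clauses):
        return None  # conflict: backtrack
    if len(assignment) == len(variables):
        return dict(assignment)  # every variable assigned without conflict
    var = next(v for v in variables if v not in assignment)
    for value in (True, False):
        assignment[var] = value
        result = solve(clauses, variables, assignment)
        if result is not None:
            return result
        del assignment[var]  # undo and try the other value
    return None

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
clauses = [[1, 2], [-1, 2], [-2, 3]]
print(solve(clauses, [1, 2, 3]))
```

Note there is no objective function anywhere: the stopping condition is "all variables assigned, no conflicts", which is why whether to call this optimization is genuinely ambiguous.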
In contrast, I haven't seen any work on this* - optimization is always described as finding solutions which fit all constraints. (The closest thing to an exception is Lagrange multipliers—that method produces a set of candidate solutions which includes the maximum/minimum, and then one has to check all of them to find which one it is. But all candidates satisfy the other constraints.)
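A worked instance of that Lagrange-multiplier remark (the example is mine): maximize f(x, y) = x + y subject to x² + y² = 2. Setting ∇f = λ∇g gives 1 = 2λx and 1 = 2λy, so x = y, and the constraint then forces x = y = 1 or x = y = −1. Both candidates satisfy the constraint; only evaluating f distinguishes the maximum from the minimum:

```python
# Candidate points from the Lagrange conditions for f(x, y) = x + y
# on the circle x^2 + y^2 = 2 (derived by hand above).
candidates = [(1, 1), (-1, -1)]  # both lie on the constraint circle

f = lambda x, y: x + y

# Every candidate satisfies the constraint; the final check against f is
# the only step where "maximum" enters at all.
best = max(candidates, key=lambda p: f(*p))
print(best, f(*best))  # → (1, 1) 2
```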
The way it's described would have one think it maximizes a utility function—one which only returns 0 or 1. But the way the solver works is by assembling pieces toward that end—navigating the space of possible solutions in a fashion that will explore/eliminate every possibility before giving up.
*This really stuck out once I had a problem in that form. I'm still working on it. Maybe work on chess AI includes this, because the problem can't be fully solved, and that's what it takes to get people to give up on "optimal" solutions which take forever to find.