SAT solvers work by trying to assign as many variables as possible without conflict. Since there are finitely many constraints, the maximum is at most all of them, so the solver terminates. (If the people writing it assume a solution exists, then they probably don’t include code to keep the current “best” assignment around.)
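To make the picture concrete, here is a minimal backtracking sketch of that search process (my own toy illustration, not how any production solver is actually coded): it extends a partial assignment one variable at a time, backtracks on conflict, and so either satisfies every clause or exhausts the space. Note it has no code for remembering the most-variables-assigned partial assignment it ever reached.

```python
# Toy backtracking SAT search (illustrative sketch, not a real solver).
# Clauses are lists of nonzero ints: positive = variable true, negative = false.

def satisfiable(clauses, n_vars):
    def conflict(assign):
        # A clause is violated only once every literal in it is assigned false.
        return any(
            all(abs(l) in assign and assign[abs(l)] != (l > 0) for l in clause)
            for clause in clauses
        )

    def extend(assign, var):
        if var > n_vars:
            return dict(assign)  # all variables assigned, no clause violated
        for value in (True, False):
            assign[var] = value
            if not conflict(assign):
                result = extend(assign, var + 1)
                if result is not None:
                    return result
            del assign[var]
        return None  # every branch below this point conflicts: backtrack

    return extend({}, 1)
```

Because the search is exhaustive, returning `None` is a proof of unsatisfiability rather than a “best effort” answer.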
As Joar Skalse noted below, this could be considered intrinsic optimization. (Though if you wrote a SAT solver that consisted solely of trying random assignments for all variables at once until it happened to find one satisfying all constraints, it arguably wouldn’t be an optimizer, absent additional code to retain the best solution found so far.)
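That hypothetical non-optimizer is easy to write down (again my own sketch): it guesses complete assignments uniformly at random, keeps nothing between tries, and either stumbles on a satisfying assignment or gives up.

```python
import random

# Hypothetical guess-and-check "solver": random complete assignments,
# no memory of the best attempt so far. Same clause encoding as before:
# positive int = variable true, negative int = variable false.

def random_sat(clauses, n_vars, max_tries=100_000, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return assign  # happened to satisfy every clause
    return None  # gave up; no partial progress was ever retained
```

Unlike the backtracking version, a `None` here proves nothing: the search navigates no structure and eliminates no possibilities.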
In contrast, I haven’t seen any work on this*: optimization is always described as finding solutions that satisfy all constraints. (The closest thing to an exception is Lagrange multipliers: that method produces a set of candidate solutions which includes the maximum/minimum, and then one has to check all of them to find which one it is. But all candidates satisfy the other constraints.)
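A standard textbook instance of that pattern (my example, chosen for simplicity): extremize f(x, y) = x + y on the circle x² + y² = 1. Solving ∇f = λ∇g gives 1 = 2λx = 2λy, hence x = y, and the constraint then yields x = y = ±1/√2. Both candidates lie exactly on the circle; only evaluating f tells you which is the maximum.

```python
import math

# Lagrange-multiplier candidates for f(x, y) = x + y on x^2 + y^2 = 1,
# found by hand above: x = y = +1/sqrt(2) and x = y = -1/sqrt(2).

f = lambda x, y: x + y
candidates = [(1 / math.sqrt(2), 1 / math.sqrt(2)),
              (-1 / math.sqrt(2), -1 / math.sqrt(2))]

# Every candidate already satisfies the constraint...
assert all(abs(x * x + y * y - 1) < 1e-12 for x, y in candidates)

# ...but only checking each one identifies the maximum (f = sqrt(2)).
best = max(candidates, key=lambda p: f(*p))
```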
The way it’s described, one would think the solver was maximizing a utility function that only returns 0 or 1. But it actually works by assembling pieces toward that end, navigating the space of possible solutions in a fashion that will explore or eliminate every possibility before giving up.
*Which really stuck out once I had a problem in that form. I’m still working on it. Maybe work on chess AI includes this, because the problem can’t be fully solved, and that’s what it takes to get people to give up on “optimal” solutions that take forever to find.