My take:
Optimization is complicated. Much of AI behavior and safety research is exactly this topic: how to optimize only when it’s actually beneficial, which comes down to how you define “optimal.”
Satisficing is just recognizing that you can reach points outside the range your optimization plan was built for. You have “enough” of whatever it is you’re measuring, so marginal increases are worth less, and you should look at other resources/experiences/parameters to optimize instead of that one. In other words, recalculate your optimization plan to figure out what’s important now.
It’s not that you stop optimizing—if you make decisions at all, you _MUST BE_ optimizing something. It’s that you downweight the one component you’ve been focusing on, in order to upweight other things in your overall optimization. “An optimizer is exactly the same as a satisficer below its satisfaction level” is an instructive phrase.
Stuart Armstrong’s https://www.lesswrong.com/posts/tb9KnFvPEoFSkQTX2/satisficers-undefined-behaviour illustrates this point.
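One way to formalize the quoted phrase, as a minimal Python sketch with toy utilities and decision rules of my own (not Armstrong's setup): as long as no option clears the satisfaction threshold, the satisficer's choice is exactly the optimizer's; only above the threshold do the two diverge.

```python
# Toy sketch (my own utilities and rules, not Armstrong's setup) of
# "an optimizer is exactly the same as a satisficer below its satisfaction level".

def optimize(options, utility):
    """Maximizer: always take the highest-utility option."""
    return max(options, key=utility)

def satisfice(options, utility, threshold):
    """Satisficer: take the first option that clears the threshold;
    until something does, it just keeps picking the best it can find."""
    for option in options:
        if utility(option) >= threshold:
            return option
    return max(options, key=utility)  # nothing is "good enough" yet: behave like the maximizer

utility = {"a": 3, "b": 7, "c": 5}.get

# Below the satisfaction level (threshold 10, best available utility 7),
# the two rules agree.
assert satisfice("abc", utility, threshold=10) == optimize("abc", utility) == "b"

# Above it, they diverge: the satisficer grabs the first "good enough" option,
# while the optimizer still insists on the best one.
print(satisfice("abc", utility, threshold=3), optimize("abc", utility))  # a b
```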
I think it does a great disservice to talk about optimizing utility directly, rather than resources or states of the universe. A rational agent always optimizes utility; that’s the very definition of utility. There are many cases where optimizing some set of resources (slack, money, other humans saved or subjugated, whatever) perfectly optimizes utility. Satisficing is the recognition that those utility curves (really, resource-utility relationships) have inflection points, past which a different set of resources needs to be optimized in order to keep maximizing utility.
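A toy illustration of that last point, with curves and numbers invented purely for the example: give slack a hard “enough” level and money ordinary diminishing returns, and a greedy utility maximizer pivots from accumulating slack to accumulating money exactly when the slack curve flattens out.

```python
import math

# Toy resource-utility curves (invented for illustration): slack is satiable,
# money has ordinary diminishing returns.
def utility(money, slack):
    return math.log(1 + money) + 2 * min(slack, 3)

def best_next_unit(money, slack):
    """Greedy utility maximizer: put the next unit of effort wherever it raises utility more."""
    gain_money = utility(money + 1, slack) - utility(money, slack)
    gain_slack = utility(money, slack + 1) - utility(money, slack)
    return "money" if gain_money > gain_slack else "slack"

money = slack = 0
for _ in range(6):
    if best_next_unit(money, slack) == "money":
        money += 1
    else:
        slack += 1
    print(money, slack)
# Output: slack climbs to 3 first; after that every unit of effort goes to money.
# The agent never stopped maximizing utility; it just stopped stacking slack
# once the slack-utility curve went flat.
```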