I don’t think it’s actually true that “we” always talk about programs as optimizing some utility function. Many programs aren’t optimizing anything. (Well, I guess you can describe pretty much anything as optimizing a sufficiently artificially-defined utility function, but that’s not a helpful thing to do.)
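(To make that parenthetical concrete: here's a throwaway Python sketch, with a made-up toy program, of how you can always cook up a utility function after the fact that any given program "maximizes". The description fits, but it tells you nothing about the program.)

```python
def some_program(x):      # stand-in for literally any deterministic program
    return x + 1

def induced_utility(x, output):
    # "Utility" is 1 for exactly the output the program produces, 0 otherwise,
    # so the program trivially "maximizes" it by construction, not by design.
    return 1.0 if output == some_program(x) else 0.0

assert induced_utility(3, some_program(3)) == 1.0
```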
But
There are theorems, most notably the von Neumann–Morgenstern utility theorem, that kinda-sorta say that a perfectly rational agent has to behave as if it’s maximizing the expectation of some utility function: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem.
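(Roughly what the theorem says, for what it's worth: if an agent's preferences $\succeq$ over lotteries satisfy completeness, transitivity, continuity, and independence, then there is a utility function $u$, unique up to positive affine transformation, such that

$$L \succeq M \iff \mathbb{E}_L[u] \ge \mathbb{E}_M[u],$$

i.e. the agent prefers whichever lottery has the higher expected utility.)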
“Optimizing a utility function” seems like a pretty good approximate description of what anything acting with purpose in the world is doing.
But but
Nothing made out of actual physical matter in the actual physical universe is going to be a perfectly rational agent in the relevant sense.
Human beings definitely don’t behave exactly as if optimizing well-defined utility functions.
It’s easy to envisage nightmare scenarios, should some AI gain a great deal of power, where the AI single-mindedly optimizes some utility function that turns out to produce very bad results when something that powerful pursues it.
Today’s most impressive AI systems aren’t (so far as we know) trying to optimize anything. (You can loosely describe what they do as “trying to maximize the plausibility of the text being generated”, but there isn’t actually any optimization process going on when the system is running, only when it’s being trained.)
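(A toy sketch of that training-time/run-time distinction, using a made-up bigram "language model" rather than anything like a real system: the explicit optimization, a gradient step on a loss, lives entirely in the training loop, while generation is just repeated lookups and sampling.)

```python
import numpy as np

VOCAB = 5                          # made-up toy vocabulary size
rng = np.random.default_rng(0)
logits = np.zeros((VOCAB, VOCAB))  # "model": next-token scores per previous token

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def train_step(prev_tok, next_tok, lr=0.1):
    # Gradient step on cross-entropy loss: this is where the optimization happens.
    probs = softmax(logits[prev_tok])
    grad = probs.copy()
    grad[next_tok] -= 1.0          # d(loss)/d(logits) for cross-entropy
    logits[prev_tok] -= lr * grad

def generate(start_tok, n):
    # Lookups and sampling only: no loss, no search, nothing being optimized.
    out = [start_tok]
    for _ in range(n):
        out.append(int(rng.choice(VOCAB, p=softmax(logits[out[-1]]))))
    return out

corpus = [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]   # tiny made-up "training data"
for a, b in zip(corpus, corpus[1:]):
    train_step(a, b)
print(generate(0, 8))
```

Once training is done, running `generate` involves no objective at all; "trying to maximize plausibility" is really shorthand for "it was trained so that sampling from it tends to produce plausible text".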
But but but
Some people worry that an AI system that isn’t overtly trying to optimize anything might have things in its inner workings that effectively are performing optimization processes, which could be bad on account of those nightmare scenarios. (Especially as, in such a case, the optimization target won’t have been carefully designed by anyone; it’ll just be something whose optimization happened to produce good results during the training process.)
Anyway: I don’t think anyone finds it helpful to think of all programs as optimizing something. But some programs, particularly ones that are in some sense trying to get things done in a complicated world, might helpfully be thought of that way, either because they literally are optimizing something or because they’re doing something like optimizing something.