Got it, that’s the case I was thinking of as “redrawing the system boundary”. Makes sense.
That still leaves the problem that we can write an (internal) optimizer which isn’t iterative. For instance, a convex function optimizer which differentiates its input function and then algebraically solves for zero gradient. (In the real world, this is similar to what markets do.) This was also my main complaint about Flint’s notion of “optimization”: not all optimizers are iterative, and sometimes they don’t even have an “initial” point against which we could compare.
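A minimal sketch of the kind of non-iterative optimizer I mean (the helper name and the quadratic restriction are mine, for illustration): it finds the minimum algebraically by setting the derivative to zero, with no initial point and no trajectory through the search space.

```python
def argmin_quadratic(a, b, c):
    """Minimize the convex objective f(x) = a*x**2 + b*x + c (a > 0)
    by solving f'(x) = 2*a*x + b = 0 in closed form -- no iteration,
    no starting point, no sequence of intermediate candidates."""
    if a <= 0:
        raise ValueError("objective must be strictly convex (a > 0)")
    return -b / (2 * a)

# f(x) = (x - 3)**2 + 1 = x**2 - 6x + 10 is minimized at x = 3
print(argmin_quadratic(1, -6, 10))  # -> 3.0
```

There is simply no “initial configuration” inside this computation that the answer could be said to have moved away from.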
I’m a bit confused: why can’t I just take the initial state of the program (or of the physical system representing the computer) as the initial point in configuration space for your example? The execution of your program is still a trajectory through the configuration space of your computer.
Personally, my biggest issue with optimizing systems is that I don’t know what “smaller” really means for the target space. If the target space has only one state fewer than the total configuration space, is this still an optimizing system? Should we compute a ratio of measure between the target and total configuration spaces to get some sort of optimizing power?
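One candidate formalization of that ratio (an assumption on my part, not Flint’s definition) is to count the bits needed to single out the target set, log2(total / target). On this measure, a target missing just one state out of N scores nearly zero bits, matching the intuition that such a system barely optimizes:

```python
import math

def optimization_power_bits(target_size, total_size):
    """Bits of optimization implied by hitting a target set of
    `target_size` states out of `total_size` total states:
    log2(total / target)."""
    if not 0 < target_size <= total_size:
        raise ValueError("need 0 < target_size <= total_size")
    return math.log2(total_size / target_size)

print(optimization_power_bits(1, 1024))     # -> 10.0 bits (a tiny target)
print(optimization_power_bits(1023, 1024))  # ~0.0014 bits (one state excluded)
```

So rather than a binary “is this an optimizing system?”, this would give a graded notion where excluding one state counts for almost nothing.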
The initial state of the program/physical computer may not overlap with the target space at all. The target space wouldn’t be larger or smaller (in the sense of subsets); it would just be an entirely different set of states.
Flint’s notion of optimization, as I understand it, requires that we can view the target space as a subset of the initial space.