This model explicitly requires that you deal only with physical processes, so your convex function solver would take time to get from the starting state to the end state. If it runs non-iteratively, then it ceases to be an optimizing system after it has computed the solution, since there is no longer a target configuration.
I’m not sure what you’re trying to say here. What’s the state space (in which both the start and end state of the optimizer live), what’s the basin of attraction (i.e. the set of allowed initial conditions), and what’s the target region within the state space? And remember, the target region needs to be a subset of the allowed initial conditions.
The end state is the solution to the convex function being stored in some physical registers. The initial state is those registers containing arbitrary data to be overwritten. It’s not particularly interesting as optimization problems go (not a very large basin of attraction), but it fulfills the basic criteria.
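To make that concrete, here is a minimal sketch of the kind of one-shot system I mean (the particular function and register values are illustrative assumptions, not anything from the post):

```python
import random

# Hypothetical hard-coded convex function: f(x) = (x - 3)^2 + 1,
# whose unique minimizer is x = 3.
def f(x):
    return (x - 3) ** 2 + 1

# "Physical register" holding arbitrary initial data to be overwritten.
output_register = random.uniform(-1e6, 1e6)

# One-shot, non-iterative solve: for this f the answer is written directly.
# End state: the solution sits in the register, and nothing further happens.
output_register = 3.0

# Sanity check: no nearby perturbed value does better.
assert all(f(output_register) <= f(output_register + d) for d in (-1.0, 1.0))
```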
The unique thing about your example (relative to the examples in the post) is that it solves once and then it is done, so it ceases to be an optimizing system once it finishes computing the solution to your convex function.
With a slight modification, you could repeat this algorithm in a loop so that it constantly recomputes the solution to a new function. Now the initial state can be any value in the result and input registers, and the target region is the set of states where the output registers hold the appropriate solution to whatever function is in the input registers. This widens the basin of attraction to cover both the input and output registers rather than just the output.
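A sketch of what I have in mind (the solver and the register representations are my own illustrative assumptions):

```python
import random

def minimize(f, lo=-1e6, hi=1e6, steps=200):
    # Crude ternary search: adequate for a one-dimensional convex f.
    for _ in range(steps):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Both "registers" start with arbitrary data.
input_register = random.uniform(-10, 10)    # encodes f(x) = (x - input)^2
output_register = random.uniform(-1e6, 1e6)

for _ in range(10):  # looped forever in the real system
    # Whatever is in the input register defines the current problem...
    f = lambda x, a=input_register: (x - a) ** 2
    # ...and each pass drives the output register to that problem's
    # minimizer, so a perturbation to either register is corrected on
    # the next pass.
    output_register = minimize(f)
    input_register = random.uniform(-10, 10)  # a new problem arrives
```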
Ok, two problems with this:
1. There’s no reason why that target set would be smaller than the basin of attraction. Given one such optimization problem, there are no obvious perturbations we could make which would still leave the result in the target region.
2. The target region is not a subset of the basin of attraction. The system doesn’t evolve from a larger region to a smaller subset (as in the Venn-diagram visuals in the OP), it just evolves from one set to another.
The first problem explicitly violates the OP’s definition of an optimizer, and the second problem violates one of the unspoken assumptions present in all of the OP’s examples.
I don’t believe that either of these points is true. In your original example, there is one correct solution for any convex function. I will assume a single hard-coded function for the following, but it can be extended to work for an arbitrary function.
The output register having the correct solution is the target set.
The output register having any state is the basin of attraction.
Clearly any specific number (or rather, the singleton containing that number) is a subset of all numbers, so the target is a subset of the basin. And further, because “all numbers” has more than one element, the target set is smaller than the basin.
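Spelled out for a hypothetical 8-bit output register (the width is just an illustrative choice):

```python
# The basin is every state the register can hold; the target set is the
# singleton containing the one correct answer.
correct_solution = 42            # stand-in for the solver's output
basin = set(range(256))          # all 2^8 possible register states
target = {correct_solution}      # singleton containing the solution

assert target <= basin           # the target is a subset of the basin
assert len(target) < len(basin)  # and strictly smaller: 1 < 256
```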
This argument applies to literally any deterministic program with nonempty output. Are you saying that every program is an optimizer?
Pretty much, yes, according to the definition given. Like I said, not a particularly interesting optimization, but an optimization nonetheless.
To expand on this, the basin of attraction is no smaller than that of an iterative process acting on a single register (and if you loop the program, then the time horizon is the same). In both cases your basin is anything in that register and the target is one particular number in that register. As far as I can tell, the definition doesn’t have any way of saying that one is “more of an optimizer” than the other. If anything, the fixed output is more optimized because it arrives more quickly.
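For instance, under the definition both of these hypothetical programs have the same basin (any register value) and the same target (the singleton {3.0}); the iterative one just takes longer to get there:

```python
def one_shot(register):
    # Overwrites the register with the known minimizer of f(x) = (x - 3)^2.
    return 3.0

def iterative(register, lr=0.1, steps=1000):
    # Gradient descent on f(x) = (x - 3)^2, acting on the same register.
    for _ in range(steps):
        register -= lr * 2 * (register - 3.0)
    return register

print(one_shot(123.456))   # 3.0, immediately
print(iterative(123.456))  # ~3.0, after many steps
```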
Ok, well, it seems like the one-shot non-iterative optimizer is an optimizer in a MUCH stronger sense than a random program, and I’d still expect a definition of optimization to say something about the sense in which that holds.