I find this very unsatisfying, not least because "optimisation power over a wide range of targets" is easily gamed just by dividing any given 'target' of a process into a whole lot of smaller targets and then saying "look at all these different targets that the process optimised for!"
The claims that optimisation power is defined simply by a process's ability to hit some target from a wide range of starting states, and/or by its having a wide range of targets it can hit, both seem easily gameable by clever sophistry in how you choose the targets by which you measure its optimisation power. There must be some part of it that separates processes we feel genuinely are good at optimising (like Clippy) from processes that only come out as good at optimising if we select clever targets to measure them by.
We seem to have very different understandings of what constitutes a wide range. A narrow target does not suddenly become a wide range of targets because I choose to subdivide it, any more than I can achieve a diversified stock portfolio by separately investing each dollar into the same company’s stock.
So I’m still pretty comfortable with my original stance here: optimization is as optimization does.
That said, I certainly agree that clever sophistry can blur the meaning of our definitions. This seems like a good reason to eschew clever sophistry when analyzing systems I want to interact effectively with.
And I can appreciate finding it unsatisfying. Sometimes the result of careful thinking about a system is that we discover our initial intuitions were incorrect, rather than discovering a more precise or compelling way to express our initial intuitions.
There must be some part of it that separates processes we feel genuinely are good at optimising (like Clippy)
I’m not really sure what you mean by “part” here. But general-purpose optimizers are more interesting than narrow optimizers, and powerful optimizers are more interesting than less powerful optimizers, and if we want to get at what’s interesting about them we need more and better tools than just the definition of an “optimization process”.