Thanks for the post; this is my favourite formalisation of optimisation so far!
One concern I haven’t seen raised so far is that the definition seems very sensitive to the choice of configuration space. As an extreme example, for any given system I can augment the configuration space with an arbitrary number of dummy dimensions and choose the dynamics so that these dummy dimensions are set to zero after each time step. Now the basin of attraction can be made arbitrarily large while the target configuration set stays a fixed size, which makes any such dynamical system look like an arbitrarily powerful optimiser.
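To make the construction concrete, here's a minimal sketch. I'm assuming one natural way to quantify power, log2(basin size / target size) (Yudkowsky's optimisation-power measure); the step function, the dummy-dimension count, and all the numbers are made up purely for illustration:

```python
import math

def step(state):
    """One time step: toy 'real' dynamics act on state[0];
    the dummy dimensions are unconditionally zeroed."""
    x, *dummies = state
    return (x // 2,) + (0,) * len(dummies)  # x // 2: converge towards 0

def apparent_power_bits(real_states, dummy_states, k, target_size=1):
    """log2(basin size / target size). Every joint configuration lies in
    the basin, because the dummy coordinates are wiped after one step."""
    basin = real_states * dummy_states ** k
    return math.log2(basin) - math.log2(target_size)

print(step((100, 7, 3)))  # -> (50, 0, 0): dummies vanish immediately

# Each dummy dimension adds log2(dummy_states) bits of apparent power
# without changing what the system actually does.
for k in (0, 10, 100):
    print(k, apparent_power_bits(real_states=256, dummy_states=2, k=k))
```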
This could perhaps be solved by demanding that the configuration space be selected according to Occam’s razor, but I think the outcome still ends up prior-dependent. It’d be nice if two observers who model optimising systems in systematically different ways always agreed to within some constant factor, akin to the invariance theorem for Kolmogorov complexity, although this may well be impossible.
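For reference, the invariance theorem says that for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, independent of $x$, with

$$\lvert K_U(x) - K_V(x) \rvert \le c_{U,V} \quad \text{for all } x.$$

The analogue I'd want here (and I'm only guessing at its form; nothing in the post pins this down) is that for any two admissible modelling choices $S$ and $S'$, the measured optimisation power of a system differs by at most a constant $c_{S,S'}$ that doesn't depend on the system. The dummy-dimension construction above suggests no such bound holds without restricting the allowed configuration spaces.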
As a less facetious example, consider a computer program that repeatedly sets a variable to 0. Again, the optimising power can be made arbitrarily large simply by making the variable arbitrarily wide, yet this doesn’t map onto any intuitive notion of the “difficulty” of the optimisation problem. Perhaps incorporating some notion of how many other optimising systems share the same target set would resolve this.
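To see the numbers, here's a sketch under the same assumed log2(basin/target) measure; the bit-widths are arbitrary:

```python
import math
import random

def run(n_bits, steps=3):
    """Toy 'optimiser': a program that repeatedly sets its variable to 0."""
    x = random.randrange(2 ** n_bits)  # arbitrary initial state
    for _ in range(steps):
        x = 0                          # the entire optimisation step
    return x

# Basin = all 2**n_bits initial values, target = {0}, so the measured
# power is log2(2**n_bits) = n_bits: it scales with the variable's
# width, not with anything resembling difficulty.
for n in (8, 64, 4096):
    assert run(n) == 0
    print(n, math.log2(2 ** n))
```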