This seems great, I’ll read and comment more thoroughly later. Two quick comments:
It didn’t seem like you defined what it meant to evolve towards the target configuration set. So it seems like either you need to commit to the system actually reaching one of the target configurations to call it an optimiser, or you need some sort of metric over the configuration space to tell whether it’s getting closer to or further away from the target configuration set. But if you’re ranking all configurations anyway, then I’m not sure it adds anything to draw a binary distinction between target configurations and all the others. In other words, can’t you keep the definition in terms of a utility function, but just add perturbations?
Also, you don’t cite Dennett here, but his definition has some important similarities. In particular, he defines several different types of perturbation (such as random perturbations, adversarial perturbations, etc.) and says that a system is more agentic when it can withstand more types of perturbation. I can’t remember exactly where this is from; perhaps The Intentional Stance?
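For what it’s worth, here is one toy way the “utility function plus perturbations” reading could be operationalised, along the lines of both points above: score a system by how reliably its utility recovers after perturbations drawn from several classes. Everything in this sketch (the name robustly_increases_utility, the perturbation classes, the toy gradient-ascent dynamics) is my own illustrative invention, not something from the post or from Dennett.

```python
import random


def robustly_increases_utility(step, utility, init_state, perturbations,
                               horizon=100, trials=30, tol=1e-6):
    """Estimate how robustly `utility` trends upward along trajectories of `step`
    when a perturbation from `perturbations` is applied at a random time.

    step:          state -> state map giving the system's dynamics
    utility:       state -> float, the proposed utility function
    perturbations: list of state -> state maps, one per "class" of perturbation
    Returns the fraction of perturbed runs whose final utility exceeds the
    utility immediately after the perturbation hit.
    """
    successes, total = 0, 0
    for perturb in perturbations:
        for _ in range(trials):
            state = init_state
            hit_time = random.randrange(horizon)
            u_after_hit = None
            for t in range(horizon):
                if t == hit_time:
                    state = perturb(state)
                    u_after_hit = utility(state)
                state = step(state)
            total += 1
            if utility(state) > u_after_hit + tol:
                successes += 1
    return successes / total


def utility(x):
    """Toy utility with a single target configuration at x = 0."""
    return -x * x


def step(x):
    """One gradient-ascent step on the toy utility (step size 0.1)."""
    return x - 0.1 * 2 * x


if __name__ == "__main__":
    perturbation_classes = [
        lambda x: x + random.gauss(0.0, 1.0),  # random noise perturbation
        lambda x: x + 5.0,                     # a crude adversarial shove
    ]
    score = robustly_increases_utility(step, utility, init_state=3.0,
                                       perturbations=perturbation_classes)
    print(f"fraction of perturbed runs that recovered utility: {score:.2f}")
```

The point of returning a fraction rather than a yes/no is that it leaves room for degrees: a system that recovers under more perturbation classes scores higher, which matches the “more agentic” intuition above.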
It didn’t seem like you defined what it meant to evolve towards the target configuration set.
+1 for swapping out the target configuration set for a utility function, and looking for a robust tendency for the utility function to increase. This would also let you express mild optimization (see this thread).
Would this work for highly non-monotonic utility functions?
It would work at least as well as the original proposal, because your utility function could just be whatever metric of “getting closer to the target states” would be used in the original proposal.
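To make that reduction explicit (my notation, not anything from the post): if $T$ is the target configuration set and $d$ is whatever metric over configuration space the original proposal would use, one candidate utility function is just the negative distance to $T$:

$$u(x) \;=\; -\,d(x, T) \;=\; -\inf_{t \in T} d(x, t).$$

Then “$u$ robustly tends to increase under perturbations” says exactly that the system robustly tends to move closer to $T$, and actually reaching a target configuration corresponds to $u$ attaining its maximum of $0$.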