You can do gradient descent (optimisation) on arbitrary 1D / 2D functions with it—and adding more dimensions is not that conceptually challenging.
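As a minimal sketch of that point: the same descent loop works on any smooth 2D function if the gradient is taken numerically. The quadratic target below is a hypothetical example of my own choosing, not anything from the discussion above.

```python
def grad_descent_2d(f, x, y, lr=0.1, steps=200, h=1e-6):
    """Gradient descent on an arbitrary smooth 2D function f(x, y),
    using central-difference numerical gradients."""
    for _ in range(steps):
        gx = (f(x + h, y) - f(x - h, y)) / (2 * h)  # approx. df/dx
        gy = (f(x, y + h) - f(x, y - h)) / (2 * h)  # approx. df/dy
        x -= lr * gx
        y -= lr * gy
    return x, y

# Toy target with a single minimum at (3, -1):
f = lambda x, y: (x - 3) ** 2 + (y + 1) ** 2
xm, ym = grad_descent_2d(f, 0.0, 0.0)
print(round(xm, 3), round(ym, 3))  # converges close to (3.0, -1.0)
```

Adding a third (or thirtieth) dimension just means more partial derivatives in the same loop, which is the sense in which extra dimensions are not conceptually challenging.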
I am not sure what optimisation problem can’t easily have cold water poured on it ;-)
Also, “retargetability” sounds as though it is your own specification.
I don’t see much about being “retargetable” here. So, it seems as though this is not a standard concern. If you wish to continue to claim that “retargetability” is to do with optimisation, I think you should provide a supporting reference.
FWIW, optimisation implies quite a bit more than just monotonic increase. You get a monotonic increase from the second law of thermodynamics (2LoT)—which is a different idea, with little to do with optimisation as such. The idea of “maximising entropy” constrains expectations far more than the second law alone does.