I don’t like “The Ground of Optimisation”.
It’s certainly enlightening, but I have deep-seated philosophical objections to it.
For starters, I think it’s too concrete. It insists on treating optimisation as a purely physical phenomenon, so even instances of abstract optimisation (e.g. computing the square root of 2) must first be translated into physical systems.
I think this is misguided and impoverishes the analysis.
I think the conception of optimisation most useful for AI alignment would be an abstract one. The concept of optimisation is clearly sound even in universes operating under different “laws of physics”.
It may be that the concept of optimisation isn’t sensible for static systems, but that is a very different constraint from restricting it to purely physical systems.
I think optimisation is at its core an abstract/computational phenomenon, and it should be possible to give it a solid grounding as such.
I agree with Alex Altair that optimisation is at its core a decrease in entropy (an increase in negentropy), and I think this conception suffices to capture optimisation in both physical and abstract systems.
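To gesture at what I mean, here’s a toy Python sketch (my own illustration, not from either post): Newton’s iteration converging on the square root of 2, where I measure the “entropy” of the system as the log-width of the region the state is known to occupy. That measure is a stand-in I made up for illustration, but it drops at every step, which is the pattern I have in mind.

```python
# A toy illustration (my own, not from either post): Newton's iteration for
# sqrt(2), viewed as an abstract optimising process that collapses a broad set
# of possible states onto a small target set. The "entropy" here is just log2
# of the width of the region known to contain the answer, and it decreases
# monotonically as the iteration proceeds.

import math

def newton_sqrt2_steps(x0: float = 2.0, steps: int = 6) -> None:
    """Iterate x -> (x + 2/x) / 2 and report a crude 'entropy' of the
    remaining uncertainty about sqrt(2) at each step."""
    x = x0
    for i in range(steps):
        # |x - sqrt(2)| bounds how far the current state is from the target,
        # so use it as the 'volume' of the current macrostate.
        width = abs(x - math.sqrt(2))
        entropy_bits = math.log2(width) if width > 0 else float("-inf")
        print(f"step {i}: x = {x:.12f}, entropy ≈ {entropy_bits:.2f} bits")
        x = (x + 2 / x) / 2  # Newton step for f(x) = x^2 - 2

newton_sqrt2_steps()
```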
I would love to read a synthesis of Alex Altair’s “An Introduction to Abstract Entropy” and Alex Flint’s “The Ground of Optimisation”.
Maybe I’ll try writing it myself later this week.
Did/will this happen?
See my other shortform comments on optimisation.
I did start the project, but it’s currently paused.
I have exams ongoing.
I’ll probably pick it up again later.