Water flowing downhill is an optimisation process? Do you mind telling me what it optimises? In other words, what is the objective function? Water flows downhill because of gravity. It need not optimise anything.
Of course, certain intrinsic properties may make some non-living things survive better than others (long half-lives, water resistance, etc.). But you don’t need to give them any objective, as though they had a mind. When you say ‘optimisation’, you ascribe an objective to something within a set of constraints, and by doing so you imply that some objectives are more ‘desirable’ than others.
I understand that it is only the human mind that makes judgments about ‘desirability’. Yes, I’m suggesting that your view is rather anthropomorphic.
1) Systems do collapse (political systems collapse due to wars, lack of social capital, etc.; financial systems collapse due to mismanagement or failure of the invisible hand; the Earth may collapse due to anthropogenic climate change; stars do explode). And this means optimisation, if any, fails. If you want to argue that systems collapse in order to optimise larger systems, please come up with some system-design explanations. I believe that a good optimisation process in a well-designed system is one-directional, at least in the short run. You don’t destroy a building only to rebuild it soon after, unless the design was bad or the requirements were miscalculated. But nature is sometimes stupid enough to destroy a forest in a flash and recreate something very similar several years later.
2) An optimal solution should be preventive rather than corrective. If the objective function of the whole world is ecological stability, then maybe humans shouldn’t be intelligent enough to invent things that harm the environment. And maybe there shouldn’t be things like bushfires in forests that take a century to regrow, or oil spills that kill plankton. Those events hurt the environment more than they benefit it. What do they optimise? Please let me know.
3) The fluctuation theorem, the Gaia hypothesis, etc. depict self-regulating systems. (Natural selection does not. Some species adapt better than others, and this may be destructive in the long run.) And self-regulating systems are not necessarily self-optimising, unless the objective function is definable, defined, and maximised when the equilibrium state is reached. And if the system has multiple possible equilibria, a self-regulating system may get stuck at a non-optimal one. I’m not talking about thermodynamic equilibrium here; I’m talking about systems in general (young democracies seem to be good examples).
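To make the multiple-equilibria point concrete, here is a minimal sketch (the landscape f and all its parameters are invented for illustration): a self-correcting process that always moves ‘uphill’, yet which equilibrium it ends up at depends entirely on where it starts.

```python
def f(x):
    # A toy landscape with two stable equilibria:
    # a lower peak near x = -0.93 and a higher peak near x = +1.06.
    return -(x**2 - 1)**2 + 0.5 * x

def grad_f(x):
    return -4 * x * (x**2 - 1) + 0.5

def settle(x, lr=0.01, steps=5000):
    """Self-correcting dynamics: always move in the direction that improves f."""
    for _ in range(steps):
        x += lr * grad_f(x)
    return x

left = settle(-1.5)    # starts in the left basin
right = settle(1.5)    # starts in the right basin
print(f(left), f(right))  # the left equilibrium scores strictly worse
```

Both end states are fixed points of the same self-regulating rule; nothing in the dynamics itself distinguishes the good equilibrium from the mediocre one.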
4) I don’t see a link between evolution and systemic optimisation. Evolution is, locally, a greedy algorithm. In computer science, greedy algorithms don’t normally give the best results; indeed, they can give the worst possible result. Moreover, organisms adapt for themselves, not for the system. They optimise their own survival probability (though the process is rather slow), and this can push the ecology from balance to imbalance, which could eventually harm the adapted species themselves.
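A standard textbook illustration of greedy suboptimality (the coin set {1, 3, 4} is the usual classroom example, not anything from this discussion): making change for 6 by always grabbing the largest coin uses three coins, while the global optimum uses two.

```python
def greedy_change(amount, coins):
    """Always grab the largest coin that fits -- the local, 'evolutionary' step."""
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def optimal_change(amount, coins):
    """Dynamic programming over the whole problem -- true global optimisation."""
    best = [0] + [None] * amount  # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins if c <= a and best[a - c] is not None]
        best[a] = min(options) + 1 if options else None
    return best[amount]

print(greedy_change(6, [1, 3, 4]))   # [4, 1, 1] -- three coins
print(optimal_change(6, [1, 3, 4]))  # 2 -- e.g. 3 + 3
```

Each greedy step is locally best, yet the overall outcome is worse than the planned one, which is exactly the gap between adaptation and systemic optimisation.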
5) I’m not sure whether Newcomb’s problem somewhat contradicts natural selection when applied to computer systems. In that environment, an AI that chooses options randomly could fare better than an intelligent AI that understands strategic dominance in game theory.
I’ve never seen an academic article saying that the world is maximising entropy (in the thermodynamic sense). I understand that the second law of thermodynamics hints that the entropy of a fairly closed system should increase over time.
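That tendency, increase without any ‘maximiser’, can be sketched with the classic Ehrenfest urn model (the particle count and step count below are arbitrary): particles hop at random between two boxes, and the entropy of the macrostate drifts towards its maximum without any objective function being consulted.

```python
import math
import random

random.seed(0)

N = 100        # particles
k = N          # start with every particle in box A: the lowest-entropy macrostate
steps = 10_000

def entropy(k):
    """Boltzmann-style entropy (in bits) of the macrostate 'k particles in box A'."""
    return math.log2(math.comb(N, k))

start_S = entropy(k)  # log2 C(100, 100) = 0 bits
for _ in range(steps):
    # Pick one particle uniformly at random and move it to the other box.
    if random.random() < k / N:
        k -= 1     # the picked particle was in box A
    else:
        k += 1     # the picked particle was in box B
end_S = entropy(k)

print(k, end_S)  # roughly N/2 particles per box, entropy near its maximum
```

Entropy rises here simply because mixed macrostates vastly outnumber unmixed ones; nothing in the dynamics consults an objective, which is precisely the distinction I’m drawing.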
When a process fairly consistently increases (or decreases) the value of a variable, it doesn’t necessarily optimise it. When you see a nation’s positive GDP growth year after year, you can’t say the nation is optimising its GDP. It is tempting, but monotone growth is not a sufficient condition for calling something an optimisation process.
In an optimisation problem, there is an objective function and, often, a set of constraints. You are trying to find the best solution among all feasible solutions. The objective function itself reveals preferences (‘best’ solution: isn’t that subjective?), and these are sometimes implicit, sometimes explicit.
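For concreteness, here is the skeleton of an optimisation problem in the mathematical sense (the objective 3x + 2y and the constraint x + y ≤ 4 are invented for illustration): the objective function, the constraints, and the search over the feasible set are all stated up front, before any solution appears.

```python
def objective(x, y):
    return 3 * x + 2 * y  # the stated preference: what counts as 'best'

def feasible(x, y):
    return x + y <= 4 and x >= 0 and y >= 0  # the constraints

# Enumerate the (integer) feasible set and take the argmax of the objective.
candidates = [(x, y) for x in range(5) for y in range(5) if feasible(x, y)]
best = max(candidates, key=lambda p: objective(*p))

print(best, objective(*best))  # (4, 0) with objective value 12
```

Note the direction of travel: the objective comes first and the solution falls out of it, not the other way round.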
I use the word ‘optimisation’ in its mathematical sense. And I know the difference between definitions and axioms. Objective functions are definitions, not axioms; you can’t take them as facts. In an optimisation problem, you start with an objective function and a set of constraints, and then you work out an optimal solution. That is the real optimisation process. You, on the other hand, observe a phenomenon and then explain it by assigning it an objective function as a theory, even though the phenomenon isn’t efficient at producing the optimal outcome.
Suppose one day you observe the global economy and see that global production, in real terms, is trending upward. Can you conclude that the world economy is an optimisation process over output? No. It is just a candidate story, not fact.
Definitely not facts.
The Gaia hypothesis is the way some biologists see how the world works. ‘Optimising Gaia’, the strongest version among the Gaia hypotheses, is a story: it is as though Earth had a mind and tried to adjust herself to be biologically favourable (the objective function here being ecological). Regardless, the point stands: all versions of the Gaia hypothesis are maps, not territories.