There is no such thing as an optimizer except in the mind of a human anthropomorphizing that entity. I wrote about it some time ago. Let me quote it; sorry, it is long. One can replace “agent” with “optimizer” in the following.
… Something like a bacterium. From the human point of view, it is alive, and has certain elements of agency, like the need to feed, which it satisfies by, say, moving up a sugar gradient toward richer food sources so it can grow. It also divides once it is mature enough, or has reached a certain size. It can die eventually, after multiple generations, and so on.
The above is a very simplified black-box description of a bacterium, but still enough to make at least some humans care to preserve it as a life form, instead of coldly getting rid of it and reusing the material for something else. Where does this compassion for life come from? I contend that it comes from the lack of knowledge about the inner workings of the “agent” and, consequently, the lack of ability to reproduce it when desired.
I give a simple example to demonstrate how lack of knowledge makes something look “alive” or “agenty” to us and elicits emotional reactions such as empathy and compassion. Enter
Bubbles!
Let’s take a… pot of boiling water. If you don’t have an immediate intuitive picture of it in mind, here is a sample video. Those bubbles look almost alive, don’t they? They are born, they travel along a gradient of water pressure to get larger, while changing shape rather chaotically, they split apart once they grow big enough, they merge sometimes, and they die eventually when reaching the surface. Just like a bacterium.
So, a black-box description of bubbles is almost indistinguishable from a black-box description of something that is conventionally considered alive. Yet few people feel compelled to protect bubbles, say, by adding more water and keeping the stove on, and most have no qualms whatsoever about turning off the stove and letting the bubbles “die”. How come?
There are some related immediate intuitive explanations for it:
We know “how the bubbles work” — it’s just water vapor after all! The physics is known, and the water boiling process can be numerically simulated from the relevant physics equations.
We know how to make bubbles at will — just pour some water into a pot and turn the stove on.
We don’t empathize with bubbles as potentially experiencing suffering, the way we might when observing, say, a bacterium writhe and try to escape when encountering an irritant.
We see all bubbles as basically identical, with no individuality, so a disappearing bubble does not feel like a loss of something unique.
Thus something whose inner workings we understand down to the physical level, and can reproduce at will without loss of anything “important”, no longer feels like an agent. This may seem rather controversial. Say, you poke a worm and it wriggles and squirms, and we immediately anthropomorphize this observation and compare it to human suffering in similar circumstances. Were we to understand the biology and the physics of the worm, we might conclude that its reactions are more like those of a wiggling bubble than those of a poked human, assuming the brain structure producing the quale “suffering” has no analog in the worm’s cerebral ganglion. Alternatively, we might conclude that worms do have a similar structure, one that produces suffering when they are interacted with in a certain way, and end up extending human morals to cover worms, or maybe also bacteria. Or even bubbles.
I claim that this is wrong: I can understand down to the physical level and reproduce at will something which implements the UCB algorithm, and it still seems like an optimisation algorithm to me.
Hmm, I don’t have a good understanding of this algorithm; from your link I gather that this is still an agent who follows the algorithm, not a physical system without an agent anywhere in it, like, say, a chess bot. But it could be my misunderstanding.
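For concreteness, the UCB (upper confidence bound) algorithm mentioned above is, in its standard UCB1 form, a fully mechanical procedure with no agent hidden inside: a few counters and one square root decide which arm to pull. A minimal sketch (the variable names and the Bernoulli test arms are mine, not from the linked post):

```python
import math
import random

def ucb1(pull, n_arms, horizon, seed=0):
    """UCB1 bandit: pull the arm maximizing
    empirical mean + sqrt(2 * ln(t) / times_pulled)."""
    random.seed(seed)
    counts = [0] * n_arms   # how many times each arm was pulled
    sums = [0.0] * n_arms   # total reward collected per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: try each arm once
        else:
            arm = max(
                range(n_arms),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Two Bernoulli arms with success rates 0.2 and 0.8;
# over time UCB1 concentrates its pulls on the better arm.
probs = [0.2, 0.8]
counts = ucb1(lambda a: 1.0 if random.random() < probs[a] else 0.0,
              n_arms=2, horizon=1000)
```

Whether one calls this bookkeeping an “optimizer” or just arithmetic is exactly the question at issue in the thread.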
Is there some other set of concepts that don’t exist only in the human mind?