“Incoherent entities have stronger reason to become more coherent than less” is very abstract, and if you unpack the abstraction it becomes clear that the claim is wrong.
This is about the agent abstraction. The idea is that we can view the behavior of a system as a choice, by considering the situation it is in, the things it “could” do (according to some Cartesian frame, I guess), and what it then actually does. And then we might want to know whether we can say something more compact about the system’s behavior viewed this way.
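To make that concrete, here is a minimal sketch of the abstraction (the names are my own framing, not anything from the post): the “situation” is a state, the things the system “could” do are actions, and the behavior is just a function from states to actions.

```python
from typing import Callable, Hashable

# The agent abstraction, minimally: behavior as a mapping from the
# situation the system is in to what it actually does. The compactness
# question is whether this function has a short description, such as
# "pick the action that best achieves goal G".
State = Hashable
Action = Hashable
Policy = Callable[[State], Action]
```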
In particular, the agent abstraction is interested in whether the system tries to achieve some goal in the environment. It turns out that if it follows certain intuitively goal-directed rules, like always making a choice and never cycling back and forth between options, then it must act as if it has some well-defined goal; this is roughly what coherence theorems say.
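A standard illustration of why the no-oscillating rule bites (my own sketch, not from the post): an agent with cyclic preferences can be money-pumped, paying a small fee at every trade and ending up exactly where it started, only poorer.

```python
# Cyclic strict preferences: A > B, B > C, C > A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}
fee = 1.0

holding, money = "B", 10.0
for offered in ["A", "C", "B", "A", "C", "B"]:
    if (offered, holding) in prefers:          # the agent strictly prefers the offer,
        holding, money = offered, money - fee  # so it pays the fee to trade
print(holding, money)  # "B" 4.0 — two full cycles, back to the start but poorer
```

Ruling out cycles like this is what forces the preferences into a consistent ordering, i.e. a well-defined goal.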
There’s then the question of what it means to have a reason. For an agent, it would obviously make sense to treat the goal, and things that derive from it, as reasons. For more general systems, I guess you could consider whatever mechanism the system acts by to be a reason.
So will every system have a reason to become an agent? That is, will every system, regardless of its mechanism of action, spontaneously change itself to have a goal? I was about to say that the answer is no, because a rock doesn’t do this. But then there’s the standard point that any system can be seen as an optimizer by putting a utility of 1 on whatever it does and a utility of 0 on everything else. That’s trivial, though, and presumably not what you meant.
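Spelled out, the standard construction looks like this (a sketch; the function names are mine):

```python
def trivializing_utility(policy):
    """Return the degenerate utility function under which `policy` is
    already optimal: 1 on whatever the system actually does in each
    state, 0 on everything else."""
    def utility(state, action):
        return 1.0 if action == policy(state) else 0.0
    return utility

# Even a rock "maximizes utility" under this construction, which is why
# the point is trivial: it puts no constraint on behavior at all.
rock = lambda state: "sit there"
u = trivializing_utility(rock)
assert u("sunny day", "sit there") == 1.0
assert u("sunny day", "roll uphill") == 0.0
```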
So the answer in practice is no. Except that in your post you then changed the question from the general “systems” to the specific “creatures”:
“you should predict that reasonable creatures will stop doing that if they notice that they are doing it”
Creatures are produced by evolution, and those that oscillate endlessly will tend to go extinct, perhaps because some other creature evolves to exploit them, or simply because they waste energy and get outcompeted by creatures that don’t do this.