They will thus be more successful in reaching some situation S than an incoherent counterpart would be.
This is only the case if either the cost of being coherent is negligible or the depth of said search tree is very high.
If I have a coherent[1] agent that takes 1 unit of time per step, or an agent that's incoherent on 1% of steps but takes only 0.99 units of time per step, the incoherent agent wins on average up to a depth of <=68 (the chance of an error-free run, 0.99^d, stays above 1/2 until d ≈ 68.97).
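A short sketch of one way to recover the 68 figure (my reading of the argument, not spelled out above): treat it as a race to depth d, where the incoherent agent wins iff it makes zero errors over all d steps, arriving at time 0.99·d < d. That happens with probability 0.99^d, which stays above 1/2 up to depth 68.

```python
import math

ERROR_RATE = 0.01  # the agent is incoherent on 1% of steps

def p_error_free(depth: int) -> float:
    """Probability the incoherent agent completes `depth` steps without an error."""
    return (1 - ERROR_RATE) ** depth

# Depth at which the error-free probability drops below 1/2:
crossover = math.log(0.5) / math.log(1 - ERROR_RATE)

print(f"p(68) = {p_error_free(68):.4f}")        # just above 0.5
print(f"p(69) = {p_error_free(69):.4f}")        # just below 0.5
print(f"crossover depth = {crossover:.2f}")     # ~68.97
```

Under this reading, "wins on average" means "wins the race more often than not"; other cost models for an error (e.g. a retried step rather than a lost race) would give a different crossover.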
(Now: once you have a coherent agent that can exploit incoherent agents, suddenly the straight probabilistic argument no longer applies. But that’s assuming that said coherent agent can evolve in the first place.)
In ‘reality’ all agents are incoherent, as they have non-zero error probability per step. But you can certainly push this probability down substantially.