I’m not sure what you mean by “learn to call the programmers”? In your analogy this sounds similar to reaching an error state… but algorithms are not optimized to reach an error state, or to avoid reaching one.
You *could*, if you were selecting from loads of algorithms or running the same one many times, end up selecting algorithms that reach an error state very often (which we already do; one of the main meta-criteria for any ML algorithm is basically to fail/finish fast), but that’s not necessarily a bad thing.
If producing (or committing to produce) an error state results in a change in utility / fitness, then it may end up optimised for / against.
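To make that concrete, here’s a toy sketch (all names and parameters are my own invention, not from any real ML setup): a selection loop over candidates that differ only in how often they hit an error state. If erroring carries a fitness penalty, selection drives error rates down; if it carries a bonus, the same loop drives them up.

```python
import random

def select(population, error_penalty, rounds=200):
    # population: list of error rates in [0, 1]. Base fitness is identical
    # for everyone, so any selection pressure comes purely from whether
    # reaching the error state changes fitness (error_penalty != 0).
    for _ in range(rounds):
        scored = []
        for err_rate in population:
            errored = random.random() < err_rate
            score = error_penalty if errored else 0.0
            scored.append((score, err_rate))
        # Keep the top-scoring half, refill by cloning the survivors.
        scored.sort(key=lambda s: s[0], reverse=True)
        survivors = [rate for _, rate in scored[: len(population) // 2]]
        population = survivors * 2
    return population

def avg(xs):
    return sum(xs) / len(xs)

random.seed(0)
pop = [random.random() for _ in range(40)]

# Penalising the error state selects against erroring; rewarding it
# selects for erroring -- same loop, only the sign of the payoff differs.
penalised = avg(select(pop, error_penalty=-1.0))
rewarded = avg(select(pop, error_penalty=+1.0))
print(penalised, rewarded)
```

The point matches the comment above: the optimiser has no opinion about error states per se; it only responds to whatever utility change they happen to produce.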
Yep!