Taking the digital evolution example, it seems that by “manipulation” you mean that the tricks the programmers used at training time did not prevent the behavior they were supposed to prevent, which fits the intuitive notion of manipulation.
We might then read these examples as evidence that even classifiers, arguably the simplest learning algorithms with the least interaction with the environment, cannot be steered away from maximizing their goal by ad-hoc variations in the training protocol. Is that your point?
Somehow, I also see your examples (at least the first one and the digital evolution one) as indicating that classifiers are robust against manipulation by their programmers: the classifiers are trying to maximize their predictive accuracy or fitness, while the programmers are trying to make them maximize something else. In that sense, we can view their behavior as “fault-tolerant”.
That said, this can be a big issue when the target of prediction is not exactly what we wanted, and we would therefore like to steer the classifier toward something different.
That, and the fact that these ad-hoc variations can introduce new goals the programmers are not aware of.
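To make the “fault-tolerance” point concrete, here is a toy sketch of my own (not one of the examples from the post): the labels are almost perfectly predicted by a “shortcut” feature the programmer would rather the model not use, the programmer applies an ad-hoc training-time tweak (corrupting that feature with noise), and the classifier still relies on it because doing so is what maximizes accuracy. The feature names and the particular tweak are invented for illustration.

```python
# Toy sketch (my construction, not from the post): an ad-hoc training-time tweak
# fails to steer a classifier away from a feature, because relying on that feature
# is still what maximizes predictive accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
y = rng.integers(0, 2, n)

intended = y + rng.normal(0, 2.0, size=n)   # feature we *want* the model to use (weak signal)
shortcut = y + rng.normal(0, 0.1, size=n)   # feature we'd rather it ignored (strong signal)

# "Ad-hoc variation in the training protocol": corrupt the shortcut with extra noise
# at training time, hoping the model falls back on the intended feature.
shortcut_train = shortcut + rng.normal(0, 0.3, size=n)
X_train = np.column_stack([intended, shortcut_train])

clf = LogisticRegression().fit(X_train, y)
print("weights (intended, shortcut):", clf.coef_[0])
print("training accuracy:", clf.score(X_train, y))
# Even corrupted, the shortcut remains the accuracy-maximizing feature, so the
# classifier keeps relying on it: the tweak is routed around, which is the
# "fault tolerance" (and the potential problem) described above.
```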