Moreover, the “adversary” need not be a human actor searching deliberately: a search for mistakes can happen unintentionally whenever a selection process with adverse incentives is applied, such as testing thousands of inputs to find which ones get the most clicks or earn the most money.
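To make the point concrete, here is a minimal sketch (entirely made-up numbers, and a toy proxy-plus-noise model rather than anything from the original discussion): if you score thousands of candidate inputs with a measurement that is only an imperfect proxy for what you care about, and keep the top scorers, the winners are disproportionately the inputs on which the measurement erred upward, even though nobody set out to find mistakes.

```python
# Toy illustration (hypothetical setup): selection over many inputs
# against a noisy proxy implicitly searches for the proxy's mistakes.
import random

random.seed(0)

N_CANDIDATES = 10_000   # e.g. thousands of candidate ads / inputs
TOP_K = 10              # how many "winners" the selection keeps

candidates = []
for _ in range(N_CANDIDATES):
    true_value = random.gauss(0.0, 1.0)   # what we actually care about
    error = random.gauss(0.0, 1.0)        # the proxy's mistake on this input
    proxy = true_value + error            # what the selection process sees
    candidates.append((proxy, true_value, error))

# Selection with no adversarial intent: just keep the highest-scoring inputs.
winners = sorted(candidates, key=lambda c: c[0], reverse=True)[:TOP_K]

avg_error_all = sum(e for _, _, e in candidates) / N_CANDIDATES
avg_error_winners = sum(e for _, _, e in winners) / TOP_K

print(f"mean proxy error over all candidates:  {avg_error_all:+.3f}")      # close to 0
print(f"mean proxy error among selected inputs: {avg_error_winners:+.3f}")  # strongly positive
```

The selected inputs carry a large positive measurement error on average, so the selection process behaves like an adversary probing the proxy's weak points, without any deliberate adversary in the loop.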
Is there a post or paper which talks about this in more detail?
I understand the problem of optimizing an imperfect measurement, but it’s not clear to me whether or how this is linked to small-perturbation adversarial examples, beyond general handwaving about the deficiencies of machine learning.